Finding Palindromes in the Latin Library

article

A playful diversion for the morning: What is the longest palindrome in the Latin language? And secondarily, what are the most common? (Before we even check, it won’t be too much of a surprise that non takes the top spot. It is the only palindrome in the Top 10 Most Frequent Latin Words list.)

As with other experiments in this series, we will use the Latin Library as a corpus and let it be our lexical playground. In this post, I will comment on method and report results. The code itself, using the CLTK and the CLTK Latin Library corpus with Python 3, is available in this notebook.

As for method, this experiment is fairly straightforward. First, we import the Latin Library, preprocess it in the usual ways, tokenize the text, and remove tokens of fewer than three letters. Now that we have a list of tokens, we can look for palindromes. We can use Python's slice notation with a negative step to create a test for palindromes. Something like this:

def is_palindrome(token):
    return token == token[::-1]

This function takes a token, makes a reversed copy of its letters, and returns True if the two match. At this point, we can filter our list of tokens using this test and report our results.
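
The filtering and counting steps are short; here is a minimal sketch, assuming tokens is the preprocessed list described above (the length check repeats the preprocessing filter for safety):

from collections import Counter

# keep tokens of three or more letters that pass the palindrome test
palindromes = [token for token in tokens if len(token) >= 3 and is_palindrome(token)]

# most frequent palindromes
for token, count in Counter(palindromes).most_common(10):
    print(token, count)

# longest palindromes, deduplicated
print(sorted(set(palindromes), key=len, reverse=True)[:10])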

Drumroll, please—the most frequently occurring palindromes in the Latin language are:

non, 166078
esse, 49426
illi, 9922
ibi, 7155
ecce, 3662
tot, 3443
sumus, 2678
sis, 1526
usu, 1472
tenet, 1072

Second drumroll, please—the longest palindrome in the Latin language is Massinissam (11 letters!), the accusative form of Massinissa, the first king of Numidia. We find other proper names in the top spots for longest palindromes: Aballaba, a site along Hadrian's Wall reported in the Notitia Dignitatum; Suillius, a 1st-cent. Roman politician; and the Senones, a Celtic tribe well known to us from Livy among others. The longest Latin palindrome that is not a proper name is the dative/ablative plural of the superlative of similis, namely simillimis (10 letters). Rounding out the top ten are: the accusative plural of sarabara, "wide trowsers," namely sarabaras; the feminine genitive plural of muratus, "walled," namely muratarum; the first-person plural imperfect subjunctive of sumere, that is sumeremus; the dative/ablative plural of silvula, "a little wood," namely silvulis (notice the u/v normalization though); and rotator, "one who turns a thing round in a circle, a whirler round," as Lewis & Short define it.

Not much here other than a bit of Latin word trivia. But we see again that, using a large corpus like the Latin Library with Python/CLTK, we can extract information about the language easily. This sort of casual experiment lays the foundation for similar work that could perhaps be used to look into questions of greater philological significance.

A closing note. Looking over the list of Latin palindromes, I think my favorite is probably mutatum, a word that means something has changed, but when reversed stays exactly the same.


Nuntii Latini: 2016 Year in Review

tutorial

Earlier this week, Radio Bremen announced that it would be discontinuing its Nuntii Latini Septimanales. As a weekly listener, I was disappointed by the news—luckily, the monthly broadcasts will continue. Where else can you read news stories about heros artis musicae mortuus ("a hero of the art of music has died," i.e. David Bowie) or Trump victor improvisus ("Trump, the unexpected victor")? Coincidentally, I learned about the fate of the Septimanales while preparing a quick study of word usage in these weekly news broadcasts. So, as a tribute to the work of the Nuntii writers and as a follow-up to the Latin Library word-frequency post from earlier this week, I present "Nuntii Latini: 2016 Year in Review".

[A Jupyter Notebook with the code and complete lists of tokens and lemmas for this post is available here.]

A quick note about how I went about this work. To get the data, I collected a list of web pages from the "Archivum Septimanale" page and used the Python Requests package to get the HTML contents of each of the weekly posts. I then used Beautiful Soup to extract only the content of the three stories that Radio Bremen publishes each week. Here is a sample of what I scraped from each page:

[['30.12.2016',
  'Impetus terroristicus Berolini factus',
  'Anis Amri, qui impetum terroristicum Berolini fecisse pro certo habetur, '
  'a custode publico prope urbem Mediolanum in fuga necatus est. In Tunisia, '
  'qua e civitate ille islamista ortus est, tres viri comprehensi sunt, in his '
  'nepos auctoris facinoris. Quos huic facinori implicatos esse suspicio est. '
  'Impetu media in urbe Berolino facto duodecim homines interfecti, '
  'quinquaginta tres graviter vulnerati erant.'],
 ['30.12.2016',
  'Plures Turci asylum petunt',
  'Numerus asylum petentium, qui e Turcia orti sunt, anno bis millesimo sexto '
  'decimo evidenter auctus est, ut a moderatoribus Germaniae nuntiatur. '
  'Circiter quattuor partes eorum sunt Cordueni. Post seditionem ad irritum '
  'redactam ii, qui Turciam regunt, magis magisque regimini adversantes '
  'opprimunt, imprimis Corduenos, qui in re publica versantur.'],
 ['30.12.2016',
  'Septimanales finiuntur',
  'A. d. XI Kal. Febr. anni bis millesimi decimi redactores nuntiorum '
  'Latinorum Radiophoniae Bremensis nuntios septimanales lingua Latina '
  'emittere coeperunt. Qui post septem fere annos hoc nuntio finiuntur. Nuntii '
  'autem singulorum mensium etiam in futurum emittentur ut solent. Cuncti '
  'nuntii septimanales in archivo repositi sunt ita, ut legi et audiri '
  'possint.']]
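
For reference, here is a rough sketch of that scraping step. The archive URL and the tag/class structure below are placeholder assumptions for illustration, not necessarily what Radio Bremen's pages actually use:

import requests
from bs4 import BeautifulSoup

# placeholder URL and markup; the real pages may be structured differently
archive_url = 'http://www.radiobremen.de/nachrichten/latein/archivum-septimanale.html'

html = requests.get(archive_url).text
soup = BeautifulSoup(html, 'html.parser')

stories = []
for story in soup.find_all('article'):
    date = story.find('time').get_text(strip=True)
    title = story.find('h3').get_text(strip=True)
    text = story.find('p').get_text(strip=True)
    stories.append([date, title, text])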

The stories were preprocessed following more or less the same process that I've used in earlier posts. One exception was that I needed to tweak the CLTK Latin tokenizer. This tokenizer currently checks tokens against a list of high-frequency forms ending in '-ne' and '-n' to best predict when the enclitic -ne should be assigned its own token. The Nuntii Latini unsurprisingly contain a number of words not on this list—mostly proper names ending in '-n', such as Clinton, Putin, Erdoğan, John, and Bremen, among others.
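
Schematically, the check works something like this (a simplified illustration of the logic with a made-up exception set, not the CLTK tokenizer's actual internals):

# made-up exception set; the real tokenizer maintains its own lists
NE_EXCEPTIONS = {'clinton', 'putin', 'erdogan', 'john', 'bremen'}

def split_ne(token):
    t = token.lower()
    if t in NE_EXCEPTIONS:      # known form: leave intact
        return [token]
    if t.endswith('ne'):        # e.g. 'Potestne' > 'ne', 'Potest'
        return ['ne', token[:-2]]
    if t.endswith('n'):         # e.g. 'Cenavin' > 'Cenavi', 'ne'
        return [token[:-1], 'ne']
    return [token]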

Here are some basic stats about the Nuntii Latini 2016:

Number of weekly nuntii: 46 (There was a break over the summer.)
Number of stories: 138
Number of tokens: 6546
Number of unique tokens: 3021
Lexical diversity: 46.15% (i.e. unique tokens / tokens)
Number of unique lemmas: 2033
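
These figures are straightforward to compute; a quick sketch, assuming tokens and lemmas are the lists produced during preprocessing:

print('Number of tokens:', len(tokens))
print('Number of unique tokens:', len(set(tokens)))
print('Lexical diversity: {:.2%}'.format(len(set(tokens)) / len(tokens)))
print('Number of unique lemmas:', len(set(lemmas)))
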
Here are the top tokens:

Top 10 tokens in Nuntii Latini 2016:

       TOKEN       COUNT       TYPE-TOK %  RUNNING %   
    1. in          206         3.15%       3.15%       
    2. est         135         2.06%       5.21%       
    3. et          106         1.62%       6.83%       
    4. qui         70          1.07%       7.9%        
    5. ut          56          0.86%       8.75%       
    6. a           54          0.82%       9.58%       
    7. sunt        50          0.76%       10.34%      
    8. esse        42          0.64%       10.98%      
    9. quod        41          0.63%       11.61%      
   10. ad          40          0.61%       12.22%      

How does this compare with the top tokens from the Latin Library that I posted earlier in the week? The usual suspects, overall. It is curious that the Nuntii use -que relatively infrequently, and even et less than we would expect compared to a larger sample like the Latin Library. There seems to be a slight preference for a (#6) over ab (#27). [A similar pattern holds for e (#21) vs. ex (#25).] And three forms of the verb sum crack the Top 10—an interesting feature of Nuntii Latini style.

The top lemmas are more interesting:

Top 10 lemmas in Nuntii Latini 2016:

       LEMMA       COUNT       TYPE-LEM %  RUNNING %   
    1. sum         323         4.93%       4.93%       
    2. qui         208         3.18%       8.11%       
    3. in          206         3.15%       11.26%      
    4. et          106         1.62%       12.88%      
    5. annus       91          1.39%       14.27%      
    6. ab          74          1.13%       15.4%       
    7. hic         64          0.98%       16.38%      
    8. ut          56          0.86%       17.23%      
    9. ille        51          0.78%       18.01%      
   10. homo        49          0.75%       18.76%

Based on the top tokens, it is no surprise to see sum take the top spot. At the same time, we should note that this is a good indicator of Nuntii Latini style. Of greater interest, though: unlike in the Latin Library lemma list, we see content words appearing with greater frequency. Annus is easily explained by the regular occurrence of dates in the news stories, especially formulas for the current year such as anno bis millesimo sexto decimo. Homo, on the other hand, tells us more about the content and style of the Nuntii. Simply put, the news stories concern the people of the world, and in the abbreviated style of the Nuntii, homo (and often homines) is a useful, general way of referring to them, e.g. Franciscus papa…profugos ibi permanentes et homines ibi viventes salutavit ("Pope Francis…greeted the refugees remaining there and the people living there") from April 22.

Since I had the Top 10,000 Latin Library tokens at the ready, I thought it would be interesting to “subtract” these tokens from the Nuntii list to see what remains. This would give a (very) rough indication of which words represent the 2016 news cycle more than Latin usage in general. So, here are the top 25 tokens from the Nuntii Latini that do not appear in the Latin Library list:

Top 25 tokens in Nuntii Latini 2016 (not in the Latin Library 10000):

       TOKEN               COUNT       
    1. praesidens          19          
    2. turciae             17          
    3. ministrorum         14          
    4. americae            13          
    5. millesimo           13          
    6. moderatores         12          
    7. unitarum            12          
    8. electionibus        10          
    9. factio              9           
   10. merkel              8           
   11. factionis           8           
   12. imprimis            8           
   13. habitis             8           
   14. europaeae           8           
   15. millesimi           8           
   16. turcia              7           
   17. britanniae          7           
   18. cancellaria         7           
   19. angela              7           
   20. declarauit          7           
   21. recep               7           
   22. democrata           7           
   23. profugis            7           
   24. tayyip              7           
   25. suffragiorum        6
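
The "subtraction" itself is just a membership filter; a minimal sketch, assuming nuntii_counts is a Counter over the Nuntii tokens and latin_library_top is a set of the Latin Library's top 10,000 tokens (the names are mine, not from the notebook):

remainder = [(token, count) for token, count in nuntii_counts.most_common()
             if token not in latin_library_top]

for token, count in remainder[:25]:
    print(token, count)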

As I said above, this is a rough, inexact way of weighting the vocabulary. At the same time, it does give a good sense of the year in (Latin) world news. We see important regions in world politics (Europe, Turkey, America, Britain), major players (Angela Merkel, Recep Tayyip [Erdoğan]), and their titles (praesidens, minister, moderator). There are indicators of top news stories like the elections (electio, factio, suffragium, democrata) in the U.S. and elsewhere, as well as the refugee crisis (profugus). Now that I have this dataset, I'd like to use it to look for patterns in the texts more systematically, e.g. compute TF-IDF scores, topic model the stories, extract named entities, etc. Look for these posts in upcoming weeks.

10,000 Most Frequent ‘Words’ in the Latin Library

article

A few months ago, I posted a list of the 10,000 most frequent words in the PHI Classical Latin Texts. While I did include a notebook with the code for that experiment, I could not include the data because the PHI texts are not available for redistribution. So here is an updated post, based on a freely available corpus of Latin literature—and one that I have been using for my recent Disiecta Membra posts like this one and this one and this one—the Latin Library. (The timing is good, as the Latin Library has received some positive attention recently.) The code for this post is available as a Jupyter Notebook here.
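
At its core, the notebook builds a frequency distribution over the full token list and reports running coverage; in outline, assuming tokens is the tokenized Latin Library:

from collections import Counter

counts = Counter(tokens)
total = len(tokens)

running = 0
for rank, (token, n) in enumerate(counts.most_common(10), 1):
    running += n
    print('{:>5}. {:<12}{:<12}{:<12.2%}{:<12.2%}'.format(rank, token, n, n / total, running / total))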

The results, based on the 13,563,476 tokens in the Latin Library:

Top 10 tokens in Latin Library:

       TOKEN       COUNT       TYPE-TOK %  RUNNING %   
    1. et          446474      3.29%       3.29%       
    2. in          274387      2.02%       5.31%       
    3. est         174413      1.29%       6.6%        
    4. non         166083      1.22%       7.83%       
    5. -que        135281      1.0%        8.82%       
    6. ad          133596      0.98%       9.81%       
    7. ut          119504      0.88%       10.69%      
    8. cum         109996      0.81%       11.5%       
    9. quod        104315      0.77%       12.27%      
   10. si          95511       0.70%       12.97%

How does this compare with the previous run against the PHI corpus? Here are the frequency rankings from the PHI run, 1 through 10: et, in, -que, ne, est, non, ut, cum, si, and ad. So—basically, the same. The loss of ne from the top 10 is certainly a result of improvements to the CLTK tokenizer, specifically improvements in tokenizing the enclitic -ne. Ne is now #41 with 26,825 appearances and -ne #30 with 36,644 appearances. The combined count would still not crack the Top 10, which suggests that there may previously have been many words wrongly tokenized, e.g. 'homine' as ['homi', '-ne']. (I suspect that this still happens, but am confident that the frequency of this problem is declining. If you spot any "bad" tokenization involving words ending in '-ne' or '-n', please submit an issue.) With ne out of the Top 10, we see that quod has joined the list. It should come as little surprise that quod was #11 in the PHI frequency list.

Since the PHI post, significant advances have been made with the CLTK Latin lemmatizer. Recent tests show accuracies consistently over 90%. So, let’s put out a provisional list of top lemmas as well—

Top 10 lemmas in Latin Library:

       LEMMA       COUNT       TYPE-LEM %  RUNNING %   
    1. et          446474      3.29%       3.29%       
    2. sum         437415      3.22%       6.52%       
    3. qui         365280      2.69%       9.21%       
    4. in          274387      2.02%       11.23%      
    5. is          213677      1.58%       12.81%      
    6. non         166083      1.22%       14.03%      
    7. -que        144790      1.07%       15.1%       
    8. hic         140421      1.04%       16.14%      
    9. ad          133613      0.99%       17.12%      
   10. ut          119506      0.88%       18.0%

No real surprises here. Six of the Top 10 lemmas are indeclinable, whether conjunctions, prepositions, adverbs, or enclitic, and so remain from the top tokens list: et, in, non, -que, ad, and ut. Forms of sum and qui can be found in the top tokens list as well, est and quod respectively. Hic rises to the top on the strength of its many relatively high-ranking forms, though it should be noted that its top-ranking form is #23 (hoc), followed by #46 (haec), #71 (his), #91 (hic), and #172 (hanc), among others. Is also joins the top 10, though I have my concerns about this because of the relatively high frequency of forms overlapping with the verb eo (i.e. eo, is, eam, etc.). This result should be reviewed and tested further.

While I'm thinking about it, other concerns include the counts for hic, i.e. distinguishing the demonstrative from the adverb, as well as the slight fluctuations in the counts of indeclinables, e.g. ut (119,504 tokens vs. 119,506 lemmas), or the somewhat harder-to-explain jump in -que. So, we'll consider this a work in progress. But one that is—at least for the Top 10—more or less in line with other studies (e.g. Diederich, which—with the exception of cum—has the same words, if in a different order).


10,000 Most Frequent ‘Words’ in the Latin Canon, revisited

code

Last year, the CLTK’s Kyle Johnson wrote a post on the “10,000 most frequent words in Greek and Latin canon”. Since that post was written, I updated the CLTK’s Latin tokenizer to better handle enclitics and other affixes. I thought it would be a good idea to revisit that post for two reasons: 1. to look at the most important changes introduced by the new tokenizer features, and 2. to discuss briefly what we can learn from the most frequent words as I continue to develop the new Latin lemmatizer for the CLTK.

Here is an IPython notebook with the code for generating the Latin list: https://github.com/diyclassics/lemmatizer/blob/master/notebooks/phi-10000.ipynb. I have followed Johnson's workflow, i.e. tokenize the PHI corpus and create a frequency distribution list. (In a future post, I will run the same experiment on the Latin Library corpus using the built-in NLTK FreqDist function.)
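
In outline, that workflow amounts to something like the following sketch (phi_text, the corpus as a single string, is an assumption; the import path is the one used later in this series):

from collections import Counter
from cltk.tokenize.word import WordTokenizer

word_tokenizer = WordTokenizer('latin')
tokens = word_tokenizer.tokenize(phi_text.lower())  # lowercased during preprocessing

for token, count in Counter(tokens).most_common(10):
    print(token, '\t', count)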

Here are the results:

Top 10 tokens using the NLTK tokenizer:
et	197240
in	141628
est	99525
non	91073
ut	70782
cum	61861
si	60652
ad	59462
quod	53346
qui	46724

Top 10 tokens using the CLTK tokenizer:
et	197242
in	142130
que	110612
ne	103342
est	103254
non	91073
ut	71275
cum	65341
si	61776
ad	59475

The list gives a good indication of what the new tokenizer does:

  • The biggest change is that the (very common) enclitics -que and -ne take their place in the list of top Latin tokens.
  • The words et and non (words which do not combine with -que) are for the most part unaffected.
  • The words est, in, and ut see their counts go up because of enclitic handling in the Latin tokenizer, e.g. estne > est, ne; inque > in, que. While these tokens are the most obvious examples of this effect, it is the explanation for most of the changed counts on the top 10,000 list, e.g. amorque > amor, que. (Ad is less clear. Adque may be a variant of atque; this should be looked into.)
  • The word cum also sees its count go up, both because of enclitic handling and because of the tokenization of forms like mecum as cum, me.
  • The word si sees its count go up because the Latin tokenizer handles contractions of words like sodes (si audes) and sultis (si vultis).

I was thinking about this list of top tokens as I worked on the Latin lemmatizer this week. These top 10 tokens represent 17.3% of all the tokens in the PHI corpus; relatedly, the top 228 tokens represent 50% of the corpus. Making sure that these words are handled correctly will therefore have the largest overall effect on the accuracy of the Latin lemmatizer.
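
Those coverage figures fall out of a running total over the frequency distribution; a quick sketch, assuming counts is a Counter over all PHI tokens:

total = sum(counts.values())
running = 0
for rank, (token, n) in enumerate(counts.most_common(), 1):
    running += n
    if running / total >= 0.5:
        print('Top', rank, 'tokens cover 50% of the corpus')
        break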

A few observations…

  • Many of the highest-frequency words in the corpus are conjunctions, prepositions, adverbs, and other indeclinable, unambiguous words. These can be lemmatized with simple dictionary matching (see the sketch after this list).
  • Ambiguous tokens are the real challenge of the lemmatizer project, and none is more important than cum. Cum alone makes up 1.1% of the corpus, with both the conjunction ('when') and the preposition ('with') significantly represented. Compare this with est, which is also an ambiguous form (i.e. est from sum, "to be," vs. est from edo, "to eat"), but one of whose readings occurs by far more frequently in the corpus. For this reason, cum will be a good place to start testing a context-based lemmatizer, such as one that uses bigrams to resolve ambiguities. Quod and quam, both also in the top 20 tokens, can be added to this category.
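
To make the two strategies concrete, here is a toy sketch: a dictionary pass for the unambiguous words plus a deliberately crude bigram rule for cum. The lookup table and the rule are illustrative assumptions, not the CLTK lemmatizer's actual data or logic:

# toy lookup table for indeclinable, unambiguous words
UNAMBIGUOUS = {'et': 'et', 'in': 'in', 'non': 'non', 'ad': 'ad', 'ut': 'ut'}

def lemmatize(tokens):
    lemmas = []
    for i, token in enumerate(tokens):
        if token in UNAMBIGUOUS:
            lemmas.append(UNAMBIGUOUS[token])
        elif token == 'cum':
            # crude bigram rule: an ablative-looking neighbor suggests the
            # preposition; otherwise guess the conjunction
            nxt = tokens[i + 1] if i + 1 < len(tokens) else ''
            lemmas.append('cum (prep.)' if nxt.endswith(('o', 'a', 'is', 'ibus')) else 'cum (conj.)')
        else:
            lemmas.append(None)  # left for other strategies (regex, training data, etc.)
    return lemmas

print(lemmatize(['cum', 'amicis', 'venio']))  # ['cum (prep.)', None, None]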

In addition to high-frequency tokens, extremely rare tokens also present a significant challenge to lemmatization. Look for a post about hapax legomena in the Latin corpus later this week.

More Tokenizing Latin Text

code

When I first started working on the CLTK Latin tokenizer, I wrote a blog post both explaining tokenizing in general and also showing some of the advantages of using a language-specific tokenizer. At that point, the most important feature of the CLTK Latin tokenizer was the ability to split tokens on the enclitic ‘-que’. In the meantime, I have added several more features, described below. Like the last post, the code below assumes the following requirements: Python 3.4, NLTK3, and the current version of CLTK.

Start by importing the Latin word tokenizer with the following code:

from cltk.tokenize.word import WordTokenizer
word_tokenizer = WordTokenizer('latin')

The following code demonstrates the current features of the tokenizer:

# -que
# V. Aen. 1.1
text = "Arma virumque cano, Troiae qui primus ab oris"
word_tokenizer.tokenize(text)

>>> ['Arma', 'que', 'virum', 'cano', ',', 'Troiae', 'qui', 'primus', 'ab', 'oris']


# -ne
# Cic. Orat. 1.226.1
text = "Potestne virtus, Crasse, servire istis auctoribus, quorum tu praecepta oratoris facultate complecteris?"
word_tokenizer.tokenize(text)

>>> ['ne', 'Potest', 'virtus', ',', 'Crasse', ',', 'servire', 'istis', 'auctoribus', ',', 'quorum', 'tu', 'praecepta', 'oratoris', 'facultate', 'complecteris', '?']


# -ve
# Catull. 14.4-5
text = "Nam quid feci ego quidve sum locutus, cur me tot male perderes poetis?"
word_tokenizer.tokenize(text)

>>> ['Nam', 'quid', 'feci', 'ego', 've', 'quid', 'sum', 'locutus', ',', 'cur', 'me', 'tot', 'male', 'perderes', 'poetis', '?']


# -'st' contractions
# Prop. 2.5.1-2

text = "Hoc verumst, tota te ferri, Cynthia, Roma, et non ignota vivere nequitia?"

word_tokenizer.tokenize(text)

>>> ['Hoc', 'verum', 'est', ',', 'tota', 'te', 'ferri', ',', 'Cynthia', ',', 'Roma', ',', 'et', 'non', 'ignota', 'vivere', 'nequitia', '?']

# Plaut. Capt. 937
text = "Quid opust verbis? lingua nullast qua negem quidquid roges."

word_tokenizer.tokenize(text)

>>> ['Quid', 'opus', 'est', 'verbis', '?', 'lingua', 'nulla', 'est', 'qua', 'negem', 'quidquid', 'roges.']


# 'nec' and 'neque'
# Cic. Phillip. 13.14

text = "Neque enim, quod quisque potest, id ei licet, nec, si non obstatur, propterea etiam permittitur."

word_tokenizer.tokenize(text)

>>> ['que', 'Ne', 'enim', ',', 'quod', 'quisque', 'potest', ',', 'id', 'ei', 'licet', ',', 'c', 'ne', ',', 'si', 'non', 'obstatur', ',', 'propterea', 'etiam', 'permittitur.']


# '-n' for '-ne'
# Plaut. Amph. 823

text = "Cenavin ego heri in navi in portu Persico?"

word_tokenizer.tokenize(text)

>>> ['Cenavi', 'ne', 'ego', 'heri', 'in', 'navi', 'in', 'portu', 'Persico', '?']


# Contractions with 'si'; also handles 'sultis',
# Plaut. Bacch. 837-38

text = "Dic sodes mihi, bellan videtur specie mulier?"

word_tokenizer.tokenize(text)

>>> ['Dic', 'si', 'audes', 'mihi', ',', 'bella', 'ne', 'videtur', 'specie', 'mulier', '?']


There are still improvements to be made, but this handles a high percentage of Latin tokenization tasks. If you have any ideas for more cases that need to be handled, or if you see any errors, let me know.

Tokenizing Latin Text

code

One of the first tasks necessary in any text analysis project is tokenization—we take our text as a whole and convert it to a list of smaller units, or tokens. When dealing with Latin—or at least with digitized versions of modern editions, like those found in the Perseus Digital Library, the Latin Library, etc.—paragraph- and sentence-level tokenization present little problem. Paragraphs are usually well marked and can be split on newlines ('\n'). Sentences in modern Latin editions use the same punctuation set as English (i.e., '.', '?', and '!'), so most sentence-level tokenization can be done more or less successfully with the built-in tools found in the Natural Language Toolkit (NLTK), e.g. nltk.sent_tokenize. But just as in English, Latin word tokenization presents small, specific issues that are not addressed by NLTK. The classic case in English is the negative contraction—how do we want to handle, for example, "didn't": ["didn't"] or ["did", "n't"] or ["did", "not"]?

There are four important cases in which Latin word tokenization demands special attention: the enclitics "-que", "-ue/-ve", and "-ne", and the postpositive use of "-cum" with the personal pronouns (e.g. nobiscum for *cum nobis). The Classical Language Toolkit now takes these cases into consideration when doing Latin word tokenization. Below is a brief how-to on using the CLTK to tokenize your Latin texts by word. [The tutorial assumes the following requirements: Python 3, NLTK3, CLTK.]

Tokenizing Latin Text with CLTK

We could simply use Python to split our texts into a list of tokens. (And sometimes this will be enough!) So…

text = "Arma virumque cano, Troiae qui primus ab oris"
text.split()

>>> ['Arma', 'virumque', 'cano,', 'Troiae', 'qui', 'primus', 'ab', 'oris']

A good start, but we’ve lost information, namely the comma between cano and Troiae. This might be ok, but let’s use NLTK’s tokenizer to hold on to the punctuation.

import nltk

nltk.word_tokenize(text)

>>> ['Arma', 'virumque', 'cano', ',', 'Troiae', 'qui', 'primus', 'ab', 'oris']

Using word_tokenize, we retain the punctuation. But otherwise we have more or less the same division of words.

But for someone working with Latin, that second token is an issue. Do we really want virumque? Or are we looking for virum and the enclitic –que? In many cases, it will be the latter. Let's use CLTK to handle this. (***UPDATED 3.19.16***)

from cltk.tokenize.word import WordTokenizer

word_tokenizer = WordTokenizer('latin')
word_tokenizer.tokenize(text)

>>> ['Arma', 'que', 'virum', 'cano', ',', 'Troiae', 'qui', 'primus', 'ab', 'oris']


Using the CLTK WordTokenizer for Latin we retain the punctuation and split the special case more usefully.