In an earlier post, I explained how to import the contents of The Latin Library as a plaintext corpus for you to use with the Classical Language Toolkit. In this post, I want to show you a quick and easy way to access this corpus (or parts of this corpus).
[This post assumes that you have already imported the Latin Library corpus as described in the earlier post and, as always, that you are running the latest version of CLTK on Python 3. This tutorial was tested on v. 0.1.41. In addition, if you imported the Latin Library corpus in the past, I recommend that you delete and reimport the corpus, as I have fixed the encoding of the plaintext files so that they are all UTF-8.]
With the corpus imported, you can access it with the following command:
from cltk.corpus.latin import latinlibrary
If we check the type, we see that our imported latinlibrary is an instance of the PlaintextCorpusReader of the Natural Language Toolkit:
print(type(latinlibrary))
>>> <class 'nltk.corpus.reader.plaintext.PlaintextCorpusReader'>
Now we have access to several useful PlaintextCorpusReader functions that we can use to explore the corpus. Let’s look at working with the Latin Library as raw data (i.e. a very long string), as a list of sentences, and as a list of words.
ll_raw = latinlibrary.raw()

print(type(ll_raw))
>>> <class 'str'>

print(len(ll_raw))
>>> 96167304

print(ll_raw[91750273:91750318])
>>> Arma virumque cano, Troiae qui primus ab oris
The “raw” function returns the entire text of the corpus as a string. So with a few Python string operations, we can learn the size of the Latin Library (96,167,304 characters!) and we can do other things like print slices from the string.
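Since the raw corpus is just a string, all of Python’s string methods apply. Here is a minimal sketch of locating a passage with str.find, run on a small sample string standing in for ll_raw (the sample text and offsets are illustrative, not taken from the corpus):

```python
# A stand-in for ll_raw; with the real corpus you would call
# ll_raw.find(...) instead.
sample = "... Arma virumque cano, Troiae qui primus ab oris ..."

# str.find returns the index of the first match (or -1 if absent),
# which is how a slice offset like 91750273 can be discovered.
start = sample.find('Arma virumque')
print(sample[start:start + 45])
# Arma virumque cano, Troiae qui primus ab oris
```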
PlaintextCorpusReader can also return our corpus as sentences or as words:
ll_sents = latinlibrary.sents()
ll_words = latinlibrary.words()
Both of these are returned as instances of the class ‘nltk.corpus.reader.util.ConcatenatedCorpusView’, and we can work with them either directly or indirectly. (Note that this is a very large corpus and some of the commands—rest assured, I’ve marked them—will take a long time to run. In an upcoming post, I will discuss strategies both for iterating over these collections more efficiently and for avoiding having to wait for these results over and over again.)
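One reason the corpus views are usable at all is that they are lazy: they yield items on demand rather than building the whole collection in memory. As a small sketch of that idea, itertools.islice pulls just the first few items from any iterable without consuming the rest (a generator stands in here for the much larger ConcatenatedCorpusView):

```python
from itertools import islice

# A generator as a stand-in for latinlibrary.words(); islice takes
# the first n items lazily, without walking the whole stream.
stream = (w for w in ['DUODECIM', 'TABULARUM', 'LEGES', 'DUODECIM', 'TABULARUM'])
first_three = list(islice(stream, 3))
print(first_three)
# ['DUODECIM', 'TABULARUM', 'LEGES']
```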
# Get the total number of words (***slow***):
ll_wordcount = len(latinlibrary.words())
print(ll_wordcount)
>>> 16667761

# Print a slice of 'words' directly from the concatenated view:
print(latinlibrary.words()[:100])
>>> ['DUODECIM', 'TABULARUM', 'LEGES', 'DUODECIM', ...]

# Return a complete list of words (***slow***):
ll_wordlist = list(latinlibrary.words())

# Print a slice of 'words' from the list:
print(ll_wordlist[:10])
>>> ['DUODECIM', 'TABULARUM', 'LEGES', 'DUODECIM', 'TABULARUM', 'LEGES', 'TABULA', 'I', 'Si', 'in']

# Check for list membership:
test_words = ['est', 'Caesar', 'lingua', 'language', 'Library', '101', 'CI']

for word in test_words:
    if word in ll_wordlist:
        print('\'%s\' is in the Latin Library' % word)
    else:
        print('\'%s\' is *NOT* in the Latin Library' % word)

>>> 'est' is in the Latin Library
>>> 'Caesar' is in the Latin Library
>>> 'lingua' is in the Latin Library
>>> 'language' is *NOT* in the Latin Library
>>> 'Library' is in the Latin Library
>>> '101' is in the Latin Library
>>> 'CI' is in the Latin Library

# Find the most commonly occurring words in the list:
from collections import Counter

c = Counter(ll_wordlist)
print(c.most_common(10))
>>> [(',', 1371826), ('.', 764528), ('et', 428067), ('in', 265304), ('est', 171439), (';', 167311), ('non', 156395), ('-que', 135667), (':', 131200), ('ad', 127820)]
There are 16,667,761 words in the Latin Library. Well, this is not strictly true—for one thing, the Latin word tokenizer isolates punctuation and numbers. In addition, it is worth pointing out that the plaintext Latin Library includes the English header and footer information from each page. (This explains why the word “Library” tests positive for membership.) So while we don’t really have 16+ million Latin words, what we do have is a large list of tokens from a large Latin corpus. And now that we have this large list, we can “clean it up” depending on what research questions we want to ask. So, even though it is slow to create a list from the ConcatenatedCorpusView, once we have that list, we can perform any list operation, and much more quickly: remove punctuation, normalize case, remove stop words, etc. I will leave it to you to experiment with this kind of preprocessing on your own for now. (Although all of these steps will be covered in future posts.)
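To give a flavor of what that preprocessing looks like, here is a minimal sketch of the cleanup steps run on a small sample token list standing in for ll_wordlist. The stoplist here is a tiny hand-picked stand-in, not CLTK’s full Latin stopword list:

```python
import string

# Sample tokens standing in for ll_wordlist.
tokens = ['DUODECIM', 'TABULARUM', ',', 'Si', 'in', 'ius', '101', 'est', '.']

# A toy stoplist for illustration only.
stops = {'in', 'est', 'si'}

cleaned = [t.lower() for t in tokens]                          # normalize case
cleaned = [t for t in cleaned if t not in string.punctuation]  # remove punctuation
cleaned = [t for t in cleaned if not t.isdigit()]              # remove numerals
cleaned = [t for t in cleaned if t not in stops]               # remove stop words
print(cleaned)
# ['duodecim', 'tabularum', 'ius']
```

The same four list operations applied to the full 16-million-token list run in seconds, which is why building the list once, despite the slow first step, pays off.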
Much of the time, we will not want to work with the entire corpus but rather with subsets of it, such as the plaintext files of a single author or work. Luckily, PlaintextCorpusReader allows us to access a multi-file corpus file by file. In the next post, we will look at loading and working with smaller selections of the Latin Library.