{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This is to have markdown tables left aligned\n", "# It can be ignored\n", "from IPython.core.display import HTML\n", "table_css = 'table {align:left;display:block} '\n", "HTML(''.format(table_css))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# DIT -- Computing Learning\n", "\n", "This notebook is part of the materials for three 2023/2024 lessons:\n", "\n", "| Code | Lesson |\n", "| :--- | :----------- |\n", "| B2696 | Computational Thinking |\n", "| 99797 | Advanced Professional Skills |\n", "| B3520 | Profession-based Research |\n", "\n", "It has also been used for the PhD seminars on python offered to the PhD students in 2022, 2023, and 2024 " ] }, { "cell_type": "markdown", "metadata": { "id": "Ik_5kKpgmgJ2" }, "source": [ "# Python for Poets, 2nd part" ] }, { "cell_type": "markdown", "metadata": { "id": "_akqWAD8mgJ8" }, "source": [ "This is the second part of the Python for Poets module (still inspired by Keneth W. Church's [Unix for Poets](https://www.cs.upc.edu/~padro/Unixforpoets.pdf)). From that chapter itself:\n", "\n", "The code has been developed using Python 3.6. It has been written using a local instance of [jupyter](https://jupyter.org/). All snippets could be run in any machine with Python 3.6 (or higher) installed, or online, as a Jupyter notebook.\n", "\n", "Note that many of these exercises would be indeed simpler using simple one-liners on the Unix/Linux command line!" ] }, { "cell_type": "markdown", "metadata": { "id": "GoT__6UjmgL_" }, "source": [ "## 4. Methods" ] }, { "cell_type": "markdown", "metadata": { "id": "I_QAoE-5mgMA" }, "source": [ "Once again, **from Unix for poets**\n", "\n", "Suppose that you found that you were often computing trigrams of different things, and you found it\n", "inconvenient to keep typing the same five lines over and over. If you put the following **method**, then you could count n-grams with one single line." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "from collections import Counter \n", "\n", "# do not forget to put/upload the txt file before \n", "\n", "with open(\"genesis.txt\", 'r') as input:\n", " txt = input.read()\n", "\n", "# Apply a regex to string txt and look for all occurrences of the given pattern\n", "tokens = re.findall('[A-Za-z]+', txt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "a8-1mTnLmgMC" }, "outputs": [], "source": [ "def ngrams(tokens, n):\n", " return [\" \".join(tokens[i:i+n]) for i in range(len(tokens)-n+1)]\n", "\n", "four_grams = ngrams(tokens, 6)\n", "c = Counter(four_grams)\n", "print(c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A method is declared with the reserved word **def**. The signature of a method consists of its name, its arguments, and its return type (the latter is optional in python). The signature of our method (aka function) is as follows\n", "\n", "```\n", "def ngrams(tokens, n):\n", "```\n", "where **ngrams** is the name of the method, **tokens** is the first argument and **n** is the second argument. \n", "\n", "If the method is called as ```ngrams(tok)```, it will trigger an error because it does not stick to the signature. Different from other languages, such as [Java](https://www.thoughtco.com/method-signature-2034235), in python there is no obbligation to define the type of the arguments. Hence, it is a good idea to [document](https://www.python.org/dev/peps/pep-0257/) the method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def ngrams(tokens, n):\n", " \"\"\"Extracting the word n-grams from a list of words.\n", " \n", " Keyword arguments:\n", " tokens -- a list of strings\n", " n -- a positive integer\n", " \n", " \"\"\"\n", " if type(n) != int or n < 0 or n > len(tokens):\n", " print(\"n has to be a positive integer with max value = len(tokens). I got\", n, \"instead\")\n", " return [\" \".join(tokens[i:i+n]) for i in range(len(tokens)-n+1)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The previous method also shows an example of **defensive programming**." ] }, { "cell_type": "markdown", "metadata": { "id": "0qCxJTJ6mgMG" }, "source": [ "## 5. Counting _n_-grams from verses containing the phrase \"the land of\"" ] }, { "cell_type": "markdown", "metadata": { "id": "UNATLDmZmgMH" }, "source": [ "The most frequent _3_-gram in Genesis is \"the land of\". Let us count the 3-grams in verses containing \"the land of\" only. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VBw4TANEmgMJ" }, "outputs": [], "source": [ "with open(\"genesis.txt\", 'r') as input:\n", " txt = \" \".join([x for x in input.readlines() if \"the land of\" in x]) \n", "tokens = re.findall('[A-Za-z]+', txt)\n", "three_grams = ngrams(tokens, 3)\n", "c = Counter(three_grams)\n", "print(c)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "id": "4zbbOPxtmgMM" }, "source": [ "Let us count the 3-grams in verses **not** containing \"the land of\". " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hyfgX5d2N3EP" }, "outputs": [], "source": [ "with open(\"genesis.txt\", 'r') as input:\n", " txt = \" \".join([x for x in input.readlines() if \"the land of\" not in x]) \n", "tokens = re.findall('[A-Za-z]+', txt)\n", "three_grams = ngrams(tokens, 3)\n", "c = Counter(three_grams)\n", "print(c)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ognf_igNOxIN" }, "outputs": [], "source": [ "None" ] }, { "cell_type": "markdown", "metadata": { "id": "169Zj4xxmgMU" }, "source": [ "## Take-home exercises\n", "1. How many uppercase tokens are in this version of Genesis?\n", "2. How many 4-letter words? Hint: look at built-in function ```len()```\n", "3. Are there words without vowels?\n" ] }, { "cell_type": "markdown", "metadata": { "id": "JCp3cV0gmgMV" }, "source": [ "## 6. Some more string operations\n", "\n", "Get the _k_ most common _n_-grams" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RawhEYR5mgMW" }, "outputs": [], "source": [ "k = 1\n", "with open(\"genesis.txt\", 'r') as input:\n", " txt = \" \".join([x for x in input.readlines() if \"the land of\" in x])\n", "tokens = re.findall('[A-Za-z]+', txt)\n", "three_grams = ngrams(tokens, 1)\n", "c = Counter(three_grams)\n", "c.most_common(k)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "76_jy3NNmgMb" }, "source": [ "Get all n-grams appearing only **k times**. An then let's try to find out the longest _n_-grams appearing at least 5 times!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qmGjLC7YmgMd" }, "outputs": [], "source": [ "n = 5\n", "k = 5\n", "\n", "with open(\"genesis.txt\", 'r') as input:\n", " txt = input.read()\n", "\n", "# Apply a regular expression to the string txt and look for all occurrences of the given pattern\n", "tokens = re.findall('[A-Za-z]+', txt)\n", "# NOTE THIS HORRIBLE THING I DID HERE!!!\n", "ngrams = ngrams(tokens, n)\n", "\n", "c = Counter(ngrams)\n", "my_ngrams = [x for x in c if c[x]==k]\n", "print(my_ngrams)" ] }, { "cell_type": "markdown", "metadata": { "id": "iZ0TnSmNmgMh" }, "source": [ "#### Find palyndroms in Genesis\n", "\n", "We will use the comparator **==** to find out whether a statement is true or false" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8nus2VuKGhGR" }, "outputs": [], "source": [ "for token in tokens:\n", " if token[::-1]==token and len(token)>1:\n", " print(token)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How to store the palyndromes rather than just displaying them?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iWMimL5EmgMi" }, "outputs": [], "source": [ "palyndromes = []\n", "None \n", "c = Counter(palyndromes)\n", "print(c)" ] }, { "cell_type": "markdown", "metadata": { "id": "04Zu2txemgMm" }, "source": [ "#### Exercises\n", "\n", "(from Church)\n", "1. It is said that English avoids sequences of _-ing_ words. Find bigrams where both words end in _-ing_. Do these count as counter-exampes of the _-ing -ing_ rule?\n", "2. For comparison's sake, find bigrams where both words end in _-ed_. Should there also be a prohibition against _-ed -ed_? Are there any examples of _-ed -ed_ in Genesis? If so, how many? Which verse(s)?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3xrhywGUmgMn" }, "outputs": [], "source": [ "# OPTION 1. Using a regular expression over n-grams\n", "\n", "grams = ngrams(tokens, 2)\n", "regexp = re.compile('.+ing .+ing$')\n", "ing_ing = [x for x in grams if regexp.match(x)]\n", "print(ing_ing)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0Q2qFmLfmgMq" }, "outputs": [], "source": [ "# OPTION 2. Using string operations\n", "grams = ngrams(tokens, 2)\n", "ing_ing = []\n", "for gram in grams:\n", " pair = gram.split(\" \")\n", " if pair[0].endswith(\"ing\") and pair[1].endswith(\"ing\"):\n", " ing_ing.append(gram)\n", "print(ing_ing)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wYiQUXP2mgMu" }, "outputs": [], "source": [ "# Do it for \"-ed -ed\" here\n", "None" ] }, { "cell_type": "markdown", "metadata": { "id": "YKq738TzmgMz" }, "source": [ "#### Exercise\n", "\n", "Print out verses containing the phrase \"Let there be light\". Print out the previous verse as well" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1moDQk5AmgM0" }, "outputs": [], "source": [ "my_str = \"Let there be light\"\n", "\n", "with open(\"genesis.txt\", 'r') as input:\n", " verses = input.readlines()\n", " \n", "for verse in verses:\n", " if my_str in verse:\n", " print(verse)" ] }, { "cell_type": "markdown", "metadata": { "id": "2QPn6nrVmgM3" }, "source": [ "Now we have the two verses. **How can we print the previous verse as well???**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "b0ote36AmgM4" }, "outputs": [], "source": [ "# Change the previous code snippet to print the previous sentence as well\n", "None\n" ] }, { "cell_type": "markdown", "metadata": { "id": "xodcm3TAmgM7" }, "source": [ "## 7. String substitutions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NE7Pkml3mgM9" }, "outputs": [], "source": [ "def top_k_lines(file, k=10):\n", " with open(file) as f:\n", " head = [next(f) for x in range(k)]\n", " return head\n", "\n", "txt = top_k_lines(\"genesis.txt\")\n", "# print(txt)\n", "for line in txt:\n", " print(line.replace(\"God\", \"The Spaghetti Monster\").strip())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0Dtcl0tfJuJt" }, "outputs": [], "source": [ " with open(\"genesis.txt\") as f:\n", " print(next(f))\n", " print(next(f))\n", " print(next(f))\n", " print(next(f))\n", " print(next(f))\n", " " ] }, { "cell_type": "markdown", "metadata": { "id": "l4IuZEkHmgNA" }, "source": [ "## 8. Mutual information to find collocations\n", "\n", "From the Wikipedia articles on [mutual information](https://en.wikipedia.org/wiki/Mutual_information#Applications_2) and [collocations](https://en.wikipedia.org/wiki/Collocation):\n", "\n", "In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the **mutual dependence between the two variables**. More specifically, it quantifies the \"amount of information\" (in units such as shannons, commonly called bits) obtained about one random variable through observing the other random variable.\n", "\n", "Mutual information of words is often used as a **significance function for the computation of collocations in corpus**\n", "\n", "A collocation is a series of words or terms that co-occur **more often than would be expected by chance**.\n", "Mutiual information is defined as\n", "\n", "$MI(x,y) = log_2 \\frac{Pr(x,y)}{Pr(x) Pr(y)}$\n", "\n", "and, following [Magerman and Marcus](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.78.4178&rep=rep1&type=pdf), in NLP it can be estimated as \n", "\n", "$MI(x,y) \\approx log \\frac{\\frac{f(x,y)}{\\sum_{(i,j)\\in C}f(i,j)}}{\\frac{f(x)}{\\sum_{i\\in C}{f(i)}} \\frac{f{y}}{\\sum_{i\\in C}f(i)} }$\n", "\n", "where $\\sum_{\\cdot}$ is the sum over all instances of $\\cdot$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 69 }, "id": "lQo92EhxmgNC", "outputId": "05932ff7-ee10-4637-8561-11c44a92dd30" }, "outputs": [], "source": [ "from math import log\n", "\n", "bigrams = ngrams(tokens, 2)\n", "unigrams = ngrams(tokens, 1)\n", "\n", "freq_bigrams = Counter(bigrams)\n", "freq_unigrams = Counter(unigrams)\n", "\n", "sum_bigrams = sum(freq_bigrams.values())\n", "sum_unigrams = sum(freq_unigrams.values())\n", "\n", "print(sum_bigrams, sum_unigrams)\n", "#freqs_x_y = grams = \n", "\n", "my_str = [\"God\", \"created\"]\n", "my_str = [\"of\", \"Esau\"]\n", "# my_str = [\"LORD\", \"said\"]\n", "\n", "x = (freq_bigrams[\" \".join(my_str)] / sum_bigrams)\n", "y = (freq_unigrams[my_str[0]] / sum_unigrams)\n", "z = (freq_unigrams[my_str[1]] / sum_unigrams) \n", "\n", "mi = log((freq_bigrams[\" \".join(my_str)] / sum_bigrams) / \n", " ( (freq_unigrams[my_str[0]] / sum_unigrams) * (freq_unigrams[my_str[1]] / sum_unigrams) ) )\n", "\n", "mi2 = log(x/y*z)\n", "\n", "mi2 = log(x/(y*z))\n", "\n", "print(\"mi(%s, %s) = %f\" % (my_str[0], my_str[1], mi))\n", "print(mi2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 54 }, "id": "4Y5JHQBjNgRJ", "outputId": "1ede6cf8-6237-473f-8d80-735102c54f81" }, "outputs": [], "source": [ "print(freq_bigrams.values())" ] }, { "cell_type": "markdown", "metadata": { "id": "IFwa840amgNB" }, "source": [ "From Chuch. \"The problem is to input a text file, say Genesis (a good place to start), and output a list of words in the file along with their frequency counts. The algorithm consists of three steps:\"\n", "\n", "1. Tokenize the text into a sequence of words (_re_),\n", "2. Sort the words, and\n", "3. Count duplicates (with a _dictionary_ or with _Counter_)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SqEJTjJrmgNH", "outputId": "2d9b3740-6d34-4cc7-97a6-09d9ec0d4848" }, "outputs": [], "source": [ "from math import log\n", "\n", "bigrams = ngrams(tokens, 2)\n", "unigrams = ngrams(tokens, 1)\n", "\n", "freq_bigrams = Counter(bigrams)\n", "freq_unigrams = Counter(unigrams)\n", "\n", "sum_bigrams = sum(freq_bigrams.values())\n", "sum_unigrams = sum(freq_unigrams.values())\n", "\n", "print(sum_bigrams, sum_unigrams)\n", "\n", "for k in freq_bigrams:\n", " freq_bigrams[k] /= sum_bigrams\n", " \n", "for k in freq_unigrams:\n", " freq_unigrams[k] /= sum_unigrams\n", "\n", "#freqs_x_y = grams = \n", "\n", "my_str = [\"God\", \"created\"]\n", "my_str = [\"of\", \"Esau\"]\n", "my_str = [\"LORD\", \"said\"]\n", "\n", "mi = log((freq_bigrams[\" \".join(my_str)]) / \n", " ( (freq_unigrams[my_str[0]]) * (freq_unigrams[my_str[1]]) ) )\n", "\n", "print(\"mi(%s, %s) = %f\" % (my_str[0], my_str[1], mi))" ] }, { "cell_type": "markdown", "metadata": { "id": "kb5Yf6JdmgNL" }, "source": [ "### Take-Home Exercises (once again, from Church)\n", "\n", "1. Compute the $MI(x,y) \\forall (x,y) \\in C$ (Compute $MI(x,y)$ for all pair in the corpus\n", "2. MI is unstable for small bigram counts. Compute (or display) MI only for those bigrams x such that $f(x)\\geq 5$.\n", "3. Find the 10 bigrams in Genesis with the largest MI.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "pLQ9vsNymgNL" }, "source": [ "## 9. Make a Concordance" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Aezo1K2hmgNM" }, "outputs": [], "source": [] } ], "metadata": { "colab": { "collapsed_sections": [ "iKtW0ep_Dl6f", "kb5Yf6JdmgNL" ], "name": "Python4Poets_static.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.20" } }, "nbformat": 4, "nbformat_minor": 1 }