Saturday, October 25, 2025

Neural Networks Avoid the Bias Variance Tradeoff Via the Double Dip

Introduction

This blog post covers how neural networks avoid the bias-variance tradeoff via the double dip. The topic is of interest because it bridges classical data science techniques and neural networks. This way we can get beyond the silliness of dismissing classical data science as old and useless while treating neural networks as new and useful.

The blog post title is a one-line summary of the YouTube video "What the Books Get Wrong about AI [Double Descent]" by Welch Labs. In other words, this blog post is a summary of the material in that video.

The goal of the blog post is to introduce and demystify this complex topic for a broad audience.

Classic Data Science Talks About the Bias Variance Tradeoff

A classic book in classical data science, such as The Elements of Statistical Learning (ESL) by Trevor Hastie, ..., will have a graph like the following.

Estimate a Parabola with a Line

Let's start with the simplest possible geometry: estimate parabolic data with a line. This is a clear example of underfitting.

Compute the associated mean squared error (MSE). The MSE is the classic criterion used to evaluate the performance of an estimate. It has downsides, but they are beyond the scope of this blog post.
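Below is a minimal sketch of this step. It assumes NumPy is available and uses made-up parabolic data; it is meant to illustrate underfitting, not to reproduce the exact figure.

# Minimal sketch (assumes NumPy): fit a line to parabolic data and compute
# the mean squared error to illustrate underfitting.
import numpy as np

x = np.linspace(-3, 3, 50)
y = x ** 2                          # noiseless parabolic data

# A degree-1 polynomial (a line) cannot capture the curvature -> high bias.
coeffs = np.polyfit(x, y, deg=1)
y_hat = np.polyval(coeffs, x)

mse = np.mean((y - y_hat) ** 2)
print(f"MSE of the line fit: {mse:.3f}")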

Estimate a Parabola with Higher Order Polynomials

Next, let's estimate parabolic data with higher order polynomials to demonstrate overfitting.

Notice that the right-hand curve above corresponds to the bias-variance curve of the first figure shown in this blog post (Figure 2.11 of ESL).
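Below is a minimal sketch of the overfitting experiment. It assumes NumPy and uses made-up noisy parabolic data; the exact numbers will differ from the figures, but the pattern of low training error and high test error for the high-order fit is the point.

# Minimal sketch (assumes NumPy): fit polynomials of increasing order to a
# handful of noisy parabolic points and compare training vs. test error.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 10)
y_train = x_train ** 2 + rng.normal(scale=1.0, size=x_train.size)
x_test = np.linspace(-3, 3, 200)
y_test = x_test ** 2

for degree in (1, 2, 9):
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_mse = np.mean((y_train - np.polyval(coeffs, x_train)) ** 2)
    test_mse = np.mean((y_test - np.polyval(coeffs, x_test)) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")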

Use Regularization to Prevent Overfitting

The YouTube video uses dropout, weight decay, and other regularization techniques.
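As a concrete illustration, below is a minimal sketch of L2 regularization, the linear-model analogue of weight decay, applied to the high-order polynomial fit from the previous section. It is not the video's neural-network setup; it only shows how the penalty shrinks the weights.

# Minimal sketch (assumes NumPy): ridge (L2) regularization on a degree-9
# polynomial fit. Larger lambda shrinks the weight vector.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 10)
y_train = x_train ** 2 + rng.normal(scale=1.0, size=x_train.size)

degree = 9
X = np.vander(x_train, degree + 1)          # polynomial design matrix

for lam in (0.0, 1e-3, 1.0):
    # Ridge solution: (X^T X + lam * I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y_train)
    train_mse = np.mean((y_train - X @ w) ** 2)
    print(f"lambda={lam}: train MSE {train_mse:.3f}, ||w|| {np.linalg.norm(w):.1f}")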

It then refers to the paper "Understanding Deep Learning Requires Rethinking Generalization" by C. Zhang, ... . Below are some excerpts from the paper:

  1. Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training.

  2. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice.

      Neural Networks Cause a Double Dip in Bias Variance Tradeoff

      The paper "Reconciling Modern Machine Learning and the Bias-Variance Trade-Off" by M. Belkin, .. provides the details on how neural networks increase complexity causing a double dip in the bias variance tradeoff. Below is a figure from the paper.

      Notice that the paper states that "We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence". In other words, this is not a mathematical proof, so in some cases the double dip phenomenon may not appear.

      The double dip phenomenon appears to contradict classical theory, but it does not. It reveals that the classical U-shaped curve of test error is incomplete and that a more complex relationship exists, especially for neural networks.

      To better understand the double dip, let's use an analogy. Imagine trying to connect a few dots on a page. With a stiff ruler (a simple model), you get a poor fit. With a flexible thread (a moderately complex model), you get a good, smooth fit. With a thread that is just the right length to pass through each dot with sharp angles (the interpolation threshold), the path between dots is wild and inaccurate. But if you have a much longer, very flexible thread (a highly over-parameterized model), it can pass through all the dots while laying in smooth, gentle curves between them, leading to a better overall path.

      Below is a figure from the YouTube video that recreates the double dip using digit identification.
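      Below is a minimal sketch of the kind of experiment that probes for the double dip. It uses random Fourier features and a minimum-norm least-squares fit on a toy 1-D regression problem, not the video's digit-identification setup; whether a clear second descent appears depends on the data, the noise, and the seed.

      # Minimal sketch (assumes NumPy): sweep model width using random Fourier
      # features and a minimum-norm least-squares fit. A simplified analogue of
      # the video's experiment, not a recreation of it.
      import numpy as np

      rng = np.random.default_rng(0)
      n_train, n_test = 40, 500
      x_train = rng.uniform(-3, 3, n_train)
      y_train = np.sin(2 * x_train) + rng.normal(scale=0.3, size=n_train)
      x_test = np.linspace(-3, 3, n_test)
      y_test = np.sin(2 * x_test)

      def features(x, w, b):
          # Random cosine features of the scalar input x
          return np.cos(np.outer(x, w) + b)

      for width in (5, 20, 40, 80, 400):      # 40 = interpolation threshold here
          w, b = rng.normal(size=width), rng.uniform(0, 2 * np.pi, width)
          Phi_train, Phi_test = features(x_train, w, b), features(x_test, w, b)
          theta, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)  # min-norm fit
          test_mse = np.mean((y_test - Phi_test @ theta) ** 2)
          print(f"width {width}: test MSE {test_mse:.3f}")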

      Code

      To go to the code associated with the YouTube video, click here.

      Summary

      Classical data science talks about the bias-variance tradeoff. It also has regularization techniques to manage the bias-variance tradeoff. The bias-variance tradeoff isn't a problem to be reduced but a fundamental dilemma to be navigated.

      However, stating that the double dip of neural networks overcomes the bias-variance tradeoff would be an overstatement. The double dip doesn't invalidate the tradeoff. Instead, it reveals that the classical U-shaped curve of test error is incomplete and that a more complex relationship exists, especially for neural networks.

      Please recall that there is no free lunch. Neural networks have their own disadvantages. They have massive computational costs as well as massive data requirements.

      Tuesday, August 19, 2025

      How Do You Create Constants in Python?

       Introduction

      "Constants" are variables whose values are intended to remain unchanged throughout the program's execution. Unlike some other programming languages, Python does not have a built-in mechanism to enforce immutability for constants. However, there are approaches that can be used to indicate that a variable is a constant.

      One approach is to rely on a widely adopted naming convention. A second approach is to use typing.Final. A third approach is to use __slots__. The rest of this blog post will address each approach in turn.

      Approach 1: Naming Convention Using All Capital Letters

      PEP 8 states, "Constants are ... written in all capital letters with underscores separating words. Examples include MAX_OVERFLOW and TOTAL."

      The disadvantage is that it requires developers to remember to follow the naming convention and reviewers to remember to check for it. On top of that, there is no static or runtime checking.

      The advantage is that no additional code besides declaring the "constant" is needed.
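      Below is a minimal sketch of the convention. Note that nothing stops the reassignment; the all-capital name is purely a signal to humans.

      # Minimal sketch of the naming convention: module-level names in all capital
      # letters signal "treat this as a constant", but reassignment is still legal.
      MAX_OVERFLOW = 100
      TOTAL = 0

      MAX_OVERFLOW = 999   # legal Python; only a reviewer (or linter) would object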

      Approach 2: Use typing.Final

      Let's demonstrate how typing.Final can detect "constants" being modified via a code snippet. Create a file with the name constants.py with the following contents:

      from typing import Final
      
      CONSTANT_A: Final = "A"
      CONSTANT_B: Final = "B"
      
      CONSTANT_A = "Z" # Line 6
      
      

      When run "mypy constants.py", the following output is generated:

      constants.py:6: error: Cannot assign to final name "CONSTANT_A"  [misc]
      
      Found 1 error in 1 file (checked 1 source file)
      
      

      A disadvantage is that it produces no runtime errors.

      >>> from typing import Final
      
      >>> CONSTANT_A: Final = "A"
      >>> CONSTANT_B: Final = "B"
      
      >>> CONSTANT_A = "Z" # Line 6
      
      >>>
      
      

      However, if a static type checker is used, the code won't pass basic QA and consequently will never make it to production.

      The advantage of this approach is that it guarantees, via static type checking, that "constants" are not modified.

      Approach 3: Slots

      Let's demonstrate how __slots__ can be used to create constants via a working code snippet.

      >>> class ConstantsNamespace:
      ...     __slots__ = ()
      ...     SOME_CONSTANT = "Hi"
      ...
      >>>
      
      

      If you try to modify the constant at run time, an error will be generated.

      >>> constants = ConstantsNamespace()
      
      
      >>> constants.SOME_CONSTANT = "Bye"
      
      Traceback (most recent call last):
      
        File "<python-input-8>", line 1, in <module>
      
          constants.SOME_CONSTANT = "Bye"
          ^^^^^^^^^^^^^^^^^^^^^^^
      
      AttributeError: 'ConstantsNamespace' object attribute 'SOME_CONSTANT' is read-only
      
      >>>
      
      

      The advantage is that run time errors are generated.

      The disadvantage is that the "constants" can still be modified via the class itself because they are class variables.

      >>> constants.SOME_CONSTANT
      'Hi'
      
      >>> ConstantsNamespace.SOME_CONSTANT = "bye"
      
      >>> constants.SOME_CONSTANT
      'bye'
      
      >>>
      
      

      If you want a quick tutorial on slots, refer to the section "Lightweight Classes With .__slots__" in the article "Python Classes: The Power of Object-Oriented Programming" by Leodanis Pozo Ramos at Real Python.

      Summary

      We have discussed three different approaches. I personally like approach 2, where typing.Final is used. It is simple to implement, and it can be used to guarantee that a "constant variable" is never changed.

      If you are interested in an in depth presentation about Python constants, refer to the article "Python Constants: Improve Your Code's Maintainability" by Leodanis Pozo Ramos from Real Python.

      Wednesday, July 23, 2025

      Pydantic or Dataclass or Namedtuple or Just a Class with Attributes

       

      Introduction

      While working for a client, I was given code with the following structure.

      There was a SomeResults class

      >>> class SomeResults:
      ...     topic_1: list[str] | None
      ...     topic_2: list[str] | None
      ...     topic_3: list[str] | None
      ...
      >>>
      

      There was some function that had to return SomeResults

      >>> def some_fcn() -> SomeResults:
      ...     raise NotImplementedError()
      ...
      >>>
      

      My task was to create the implementation. There were clear instructions on what the implementation was to output, and initial instructions were provided on how to get started. The associated details are not pertinent to this blog post and consequently have been omitted. This is the standard technique of obfuscating the specific client problem.

      However, I was puzzled as to how to structure the code. Do I access object attributes directly? Do I use a namedtuple? Do I use a dataclass? Do I use Pydantic? We will explore each of these below with their associated pros and cons.

      The applicable requirements and restrictions imposed by the client are not going to be stated upfront. This way, the reader is forced to work through the thought process/approaches presented here. Anyways, sometimes everyone needs a little mystery. Also, in real life, it is rare to have all the requirements and restrictions stated upfront.

      "Optional[list[str]]" Should Replace "list[str] | None"

      The Python documentation states, "if an explicit value of None is allowed, the use of Optional is appropriate, whether the argument is optional or not". Unfortunately, one of the constraints imposed by the client was that the original declaration of SomeResults could not be modified. Consequently, we have to use "list[str] | None".

      If you are interested in the details, refer to

      Approach 1: Access Object's Attributes Directly

      The final code would look something like the following

      >>> def some_fcn() -> SomeResults:
      
      ...     topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      ...     topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      ...     topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      
      ...     result = SomeResults()
      ...     result.topic_1 = topic_a
      ...     result.topic_2 = topic_b
      ...     result.topic_3 = topic_c
      
      ...     return result
      ...
      
      >>> print(some_fcn().topic_1)
      ['topic_1x', 'topic_1y', 'topic_1z']
      
      >>> print(some_fcn().topic_2)
      ['topic_2x', 'topic_2y', 'topic_2z']
      
      >>> print(some_fcn().topic_3)
      ['topic_3x', 'topic_3y', 'topic_3z']
      

      The computation of the topics is complex, so it would be done separately. This is mimicked by

      topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      
      topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      
      topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      

      Once the topics are computed, they are gathered together to create a result.

      result = SomeResults()
      
      result.topic_1 = topic_a
      result.topic_2 = topic_b
      result.topic_3 = topic_c
      

      The advantage of the above approach is that it is quick. There is no manual implementation of methods like __init__().

      The disadvantage of the above approach is that it sets individual attributes directly using dot notation. This is not considered Pythonic. An alternative approach is to use getter and setter methods. Getters and setters are a legacy pattern from C++, where they make library packaging practical and avoid recompiling the entire world; they should be avoided in Python. Another approach is to use properties, which is considered Pythonic. These different approaches generate much controversy. Consequently, the following links will allow you to gather information that can then be used to make a decision that is appropriate to your use case and time constraints.

      Approach 2: Use Namedtuple

      The final code would look something like the following

      >>> from collections import namedtuple
      
      >>> SomeResults = namedtuple('SomeResults', ['topic_1', 'topic_2', 'topic_3'])
      
      >>> def some_fcn() -> SomeResults:
      ...     topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      ...     topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      ...     topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      ...     return SomeResults(
      ...         topic_1=topic_a,
      ...         topic_2=topic_b,
      ...         topic_3=topic_c
      ...     )
      ...
      
      >>> print(some_fcn().topic_1)
      ['topic_1x', 'topic_1y', 'topic_1z']
      
      >>> print(some_fcn().topic_2)
      ['topic_2x', 'topic_2y', 'topic_2z']
      
      >>> print(some_fcn().topic_3)
      ['topic_3x', 'topic_3y', 'topic_3z']
      
      >>>
      

      The advantage of namedtuples is that they are immutable. This immutability is helpful because you want to combine the results of the topics once, at the end; you don't want to recombine them over and over.

      The disadvantage of namedtuples is that one has to provide a value for each topic or explicitly default each value to None (SomeResults = namedtuple('SomeResults', ['topic_1', 'topic_2', 'topic_3'], defaults=(None, None, None))). This seems trivial. Unfortunately, this client had so many topics that manually defaulting all to None was impractical.
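      Below is a minimal sketch of that disadvantage, along with the explicit-defaults workaround mentioned above.

      # Minimal sketch: without defaults, every field must be supplied,
      # otherwise a TypeError is raised.
      from collections import namedtuple

      SomeResults = namedtuple("SomeResults", ["topic_1", "topic_2", "topic_3"])

      try:
          SomeResults(topic_1=["topic_1x"])          # topic_2 and topic_3 missing
      except TypeError as exc:
          print(exc)   # __new__() missing 2 required positional arguments: ...

      # Defaulting explicitly works, but one None per topic has to be listed:
      WithDefaults = namedtuple("SomeResults", ["topic_1", "topic_2", "topic_3"],
                                defaults=(None, None, None))
      print(WithDefaults())   # SomeResults(topic_1=None, topic_2=None, topic_3=None)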

      If you want to brush up on namedtuples, consider using the article  "Write Pythonic and Clean Code With namedtuple" by Leodanis Pozo Ramos.

      Approach 3: Use Dataclass

      The final code would look something like the following

      >>> from dataclasses import dataclass
      
      >>> @dataclass
      ... class SomeResults:
      ...     topic_1: list[str] | None
      ...     topic_2: list[str] | None
      ...     topic_3: list[str] | None
      ...
      
      >>> def some_fcn() -> SomeResults:
      ...     topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      ...     topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      ...     topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      ...     return SomeResults(
      ...         topic_1=topic_a,
      ...         topic_2=topic_b,
      ...         topic_3=topic_c
      ...     )
      ...
      
      >>> print(some_fcn())
      SomeResults(topic_1=['topic_1x', 'topic_1y', 'topic_1z'], topic_2=['topic_2x', 'topic_2y', 'topic_2z'], topic_3=['topic_3x', 'topic_3y', 'topic_3z'])
      

      The advantage of using a dataclass is that it is a "natural" fit because SomeResults is a class primarily used for storing data. Also, it automatically generates boilerplate methods.

      The disadvantage of dataclasses is that there is no runtime data validation.
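      Below is a minimal sketch of that disadvantage: a dataclass will happily accept a value that contradicts its type hints.

      # Minimal sketch of the missing validation: the wrong type is accepted
      # without any error.
      from dataclasses import dataclass

      @dataclass
      class SomeResults:
          topic_1: list[str] | None
          topic_2: list[str] | None
          topic_3: list[str] | None

      bad = SomeResults(topic_1=123, topic_2=None, topic_3=None)  # no error raised
      print(bad.topic_1)   # 123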

      Also, the use of a decorator might seem to violate the constraint that the original declaration of SomeResults could not be modified. The strict interpretation is that we have modified the original declaration through the use of a decorator. However, it is a local modification of the implementation for a specific purpose. As a side note, if you are ever in a situation where you can't modify the code but at the same time you have to modify the code, think decorators.

      If you want to brush up on dataclasses, consider using the article  "Data Classes in Python 3.7+ (Guide)" by Geir Arne Hjelle.

      Approach 4: Use Pydantic

      The final code would look something like the following

      >>> from pydantic import BaseModel
      
      >>> class SomeResults(BaseModel):
      ...     topic_1: list[str] | None
      ...     topic_2: list[str] | None
      ...     topic_3: list[str] | None
      ...
      
      >>> def some_fcn() -> SomeResults:
      ...     topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      ...     topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      ...     topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      ...     return SomeResults(
      ...         topic_1=topic_a,
      ...         topic_2=topic_b,
      ...         topic_3=topic_c
      ...     )
      ...
      
      >>> print(some_fcn())
      topic_1=['topic_1x', 'topic_1y', 'topic_1z'] topic_2=['topic_2x', 'topic_2y', 'topic_2z'] topic_3=['topic_3x', 'topic_3y', 'topic_3z']
      

      This particular client was processing web pages from the internet, and so automatic runtime data validation was needed. This makes Pydantic a natural fit.
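      Below is a minimal sketch of the validation Pydantic adds: passing a value that does not match the declared type raises a ValidationError at runtime.

      # Minimal sketch of Pydantic's runtime validation.
      from pydantic import BaseModel, ValidationError

      class SomeResults(BaseModel):
          topic_1: list[str] | None
          topic_2: list[str] | None
          topic_3: list[str] | None

      try:
          SomeResults(topic_1=123, topic_2=None, topic_3=None)
      except ValidationError as exc:
          print(exc)   # reports that topic_1 is not a valid list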

      A con would be that Pydantic introduces an external dependency. The alternative is to write, debug, and maintain the equivalent code for this particular use case yourself, which may not be realistic. Also, by introducing Pydantic to your tech stack, a lot of useful functionality becomes available, like JSON conversion.

      Another con is that there is a higher overhead that arises from the validation.

      Also, notice that the original class SomeResults is modified to be a subclass of BaseModel. For this particular client, this is not just a con but a deal breaker. The original class SomeResults cannot be modified.

      If you want to brush up on Pydantic, consider using the article  "Pydantic: Simplifying Data Validation in Python" by Harrison Hoffman.

      Approach 5: Use Pydantic dataclass Decorator

      The final code would look something like the following

      >>> from pydantic.dataclasses import dataclass
      
      >>> @dataclass
      ... class SomeResults:
      ...     topic_1: list[str] | None
      ...     topic_2: list[str] | None
      ...     topic_3: list[str] | None
      ...
      
      >>> def some_fcn() -> SomeResults:
      ...     topic_a = ["topic_1x", "topic_1y", "topic_1z"]
      ...     topic_b = ["topic_2x", "topic_2y", "topic_2z"]
      ...     topic_c = ["topic_3x", "topic_3y", "topic_3z"]
      ...     return SomeResults(
      ...         topic_1=topic_a,
      ...         topic_2=topic_b,
      ...         topic_3=topic_c
      ...     )
      ...
      
      >>> print(some_fcn().topic_1)
      ['topic_1x', 'topic_1y', 'topic_1z']
      
      >>> print(some_fcn().topic_2)
      ['topic_2x', 'topic_2y', 'topic_2z']
      
      >>> print(some_fcn().topic_3)
      ['topic_3x', 'topic_3y', 'topic_3z']
      
      >>>
      

      The Pydantic dataclass decorator satisfies all of the client's requirements. It supports runtime data validation, and no changes have been made to the original definition of the class SomeResults.
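      Below is a minimal sketch showing that the Pydantic dataclass decorator also validates at runtime while leaving the class body untouched.

      # Minimal sketch: the decorator adds runtime validation to the unchanged class body.
      from pydantic import ValidationError
      from pydantic.dataclasses import dataclass

      @dataclass
      class SomeResults:
          topic_1: list[str] | None
          topic_2: list[str] | None
          topic_3: list[str] | None

      try:
          SomeResults(topic_1=123, topic_2=None, topic_3=None)
      except ValidationError as exc:
          print(exc)   # reports that topic_1 is not a valid list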

      Summary

      As shown above, there are many ways to ensure that a function returns a specific type of output. The five approaches are not a thorough listing of all possible approaches, but they are illustrative, and there are length constraints imposed by people casually reading blog posts.

      We started with "Approach 1," which is the simplest. We then used namedtuples. Unfortunately, this particular client could not use them because they needed the ability to default values to None. This forced us to move on to dataclasses. However, this particular client needed runtime data validation and so Pydantic was needed. We still did not meet the client's requirements because we modified the original class SomeResults. We then used Pydantic's dataclass decorator so that we did not have to modify the class SomeResults.

      Sunday, March 2, 2025

      Identify Nouns & Verbs in Text Using Natural Language Toolkit (NLTK)

      Introduction

      One of the common tasks that I have to do is identify nouns and verbs in text. If I am doing object oriented programming, the nouns will help in deciding which classes to create. Even if I am not doing object oriented programming, having a list of nouns and verbs will facilitate understanding.

      Python Advanced Concepts

      Some of the Python code used might be considered "advanced".

      For example, we will use a list of tuples. In my opinion, going from ["txt_1", "txt_2"] to [("txt_1a", "txt_1b"), ("txt_2a", "txt_2b")] is no big deal. Refer to output 2 for how it applies in this write-up.

      Also, Counter() is used to count the number of times that a particular tuple occurs. Refer to output 3 for how it applies in this write-up.

      The contents of Counter() behave like a dictionary whose keys are tuples and whose values are integer counts. Refer to output 3 for how it applies in this write-up.
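      Below is a standalone sketch, using made-up tokens, of what that looks like.

      # Minimal sketch: counting (token, POS) tuples with Counter. The keys are
      # tuples and the values are integer counts.
      from collections import Counter

      tagged = [("leaves", "NNS"), ("tree", "NN"), ("leaves", "NNS")]
      counts = Counter(tagged)
      print(counts)                      # Counter({('leaves', 'NNS'): 2, ('tree', 'NN'): 1})
      print(counts[("leaves", "NNS")])   # 2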

      Natural Language Toolkit (NLTK)

      NLTK is used for tokenization and then assigning parts of speech (POS) to those tokens. Unfortunately, NLTK is large and complex. The reference section at the end provides links and short text snippets for the applicable NLTK components.

      Computing Environment

      Python version 3.12.9 is used.

      NLTK version 3.9.1 is used.

      I used Anaconda to install NLTK. The installation does not include the NLTK data by default, because downloading everything via nltk.download() takes approximately 2.5 GB. Consequently, I had to download punkt_tab in order to get nltk.word_tokenize() to work.

      >>> import nltk
      
      >>> nltk.download('punkt_tab')
      

      Similarly, I had to download averaged_perceptron_tagger_eng (see the source code for nltk.tag.perceptron) in order to get nltk.pos_tag() to work.

      >>> import nltk
      
      >>> nltk.download('averaged_perceptron_tagger_eng')

      Software Structure

      To go to the file containing the working code click here.

      The structure of the code is as follows

      1. Create text data.

      2. Process the text to tokenize it, assign a POS to it, and then count the combinations of tokens and POS.

      3. Filter the data such that only nouns and verbs remain.

      4. Sort the data such that the output has the desired format.

              Create Text Data

              The test data is created as follows

              1. Code Snippet 1

                TEXT_1 = """
                This is chunk one of the text.
                
                The leaves on a tree are green.
                
                The leaves on a tree aren't red.
                """
                
                TEXT_2 = """
                This is chunk two of the text.
                
                The color of the car is red.
                
                The color of the car also contains blue.
                
                The color of the car isn't green.
                """
                
                LIST_OF_STRINGS = [TEXT_1, TEXT_2]
                
                

              TEXT_1 and TEXT_2 are easily modifiable, so the reader can substitute their own test data. Also, it is easy to expand the data with TEXT_3, TEXT_4, and so on because the processing is based on the list of texts contained in LIST_OF_STRINGS.

              Process Text: tokenize, assign POS to tokens, count combinations of token and POS

              The processing of text is accomplished with the following code snippet.

              1. Code Snippet 2

                # For the combination of a word and its part of speech,
                # create a count of its associated appearance
                for i, string in enumerate(LIST_OF_STRINGS):
                
                    # Decompose string into tokens
                    string_tokenized = nltk.word_tokenize(string)
                
                    # Assign part of speech to each token
                    string_pos_tag = nltk.pos_tag(string_tokenized)
                
                    # Count the number of token, parts of speech combinations
                    if i == 0:
                        count_token_pos = Counter(string_pos_tag)
                    else:
                        count_token_pos.update(Counter(string_pos_tag))
                
                

              Tokenize

              Tokenization is accomplished via nltk.word_tokenize().

                Its output is a list of strings.

                1. Output 1

                  ['This', 'is', 'chunk', ... 'are', "n't", 'red', '.']
                  
                  

                Since the nature of the material is introductory, we will not worry about edge cases like contractions ("n't").

                  Assign POS to Tokens

                  Association of a POS w/ a token is accomplished via nltk.pos_tag().

                    Its output is a list of tuples where each tuple consists of a token and a POS.

                    1. Output 2

                      [('This', 'DT'), ('is', 'VBZ'), ('chunk', 'JJ'), ... ("n't", 'RB'), ('red', 'JJ'), ('.', '.')]
                      
                      

                    'DT' stands for determiner. 'VBZ' stands for a verb that is third person singular present. For the full list, refer to Alphabetical list of part-of-speech tags used in the Penn Treebank Project at UPenn.Edu

                    Count Combinations of Token and POS

                    Counter() is used to count the number of times each combination of token and POS occurs.

                      The output of Counter(string_pos_tag) is

                      1. Output 3 (Just for processing TEXT_1)

                        Counter(
                        
                            {
                        
                                ('.', '.'): 3, 
                        
                                ('The', 'DT'): 2, 
                        
                                ('leaves', 'NNS'): 2, 
                        
                                ...
                        
                                ('green', 'JJ'): 1, 
                        
                                ("n't", 'RB'): 1, 
                        
                                ('red', 'JJ'): 1
                        
                            }
                        
                        )
                        
                        

                      Notice that the contents of Counter() behave like a dictionary whose keys are tuples and whose values are integer counts. Each tuple key consists of a token and a POS, as shown in output 2. The integer is simply the number of times that the tuple occurs.

                      Filter Data for Nouns & Verbs

                      The first step is to identify the POS that correspond to nouns and verbs.

                        This is implemented via the following code snippet.

                        1. Code Snippet 3

                          NOUN_POS = ["NN", "NNS", "NNP", "NNPS"]
                          
                          VERB_POS = ["VB", "VBD", "VBG", "VBN", "VBP", "VBZ"]
                          
                          

                        'NN' stands for a noun which is singular. 'NNS' stands for a noun which is plural. For the full list, refer to Alphabetical list of part-of-speech tags used in the Penn Treebank Project at UPenn.Edu

                        The filtering is implemented via the following code snippet.

                        1. Code Snippet 4

                          list_noun_count = []
                          list_verb_count = []
                          for token_pos in count_token_pos:
                              if token_pos[1] in NOUN_POS:
                                  list_noun_count.append((token_pos[0], count_token_pos[token_pos]))
                              elif token_pos[1] in VERB_POS:
                                  list_verb_count.append((token_pos[0], count_token_pos[token_pos]))
                          
                          

                        In the above code, there is nothing worthy of note, just an if statement inside a for loop. If you need a refresher on iterating through a dictionary, consider reading "How to Iterate Through a Dictionary in Python" by Leodanis Pozo Ramos.

                        The output of this step is provided below.

                        1. Output 4.A: List of nouns and their counts

                          [ ('text', 2), ('leaves', 2), ('tree', 2), ('color', 3), ('car', 3) ]
                          
                          
                        2. Output 4.B: List of verbs and their counts

                          [ ('is', 4), ('are', 2), ('contains', 1) ]
                          
                          

                        Notice that the output is a list of tuples. Also, each tuple consists of a string and an integer. This will be important when the lists are sorted.

                        Sort Data & Generate Output

                        One set of output will be nouns and their associated counts. This needs to be sorted either by the nouns alphabetically or by their counts; otherwise, the data will be in an arbitrary order. Similarly, another set of output will be verbs and their counts, which will also need to be sorted.

                        This is implemented via the following code snippet.

                        1. Code Snippet 5

                          # Sort data alphabetically
                          list_noun_count.sort()
                          list_verb_count.sort()
                          
                          with open("noun_counts_alphabetized_by_noun.txt", "w", encoding="utf8") as file:
                              for noun_count in list_noun_count:
                                  file.write(noun_count[0] + ", " + str(noun_count[1]) + "\n")
                          
                          with open("verb_counts_alphabetized_by_verb.txt", "w", encoding="utf8") as file:
                              for verb_count in list_verb_count:
                                  file.write(verb_count[0] + ", " + str(verb_count[1]) + "\n")
                          
                          # Sort data by their counts
                          list_noun_count.sort(key=lambda noun_count: noun_count[1], reverse=True)
                          list_verb_count.sort(key=lambda noun_count: noun_count[1], reverse=True)
                          
                          with open("noun_counts_by_increasing_count.txt", "w", encoding="utf8") as file:
                              for noun_count in list_noun_count:
                                  file.write(noun_count[0] + ", " + str(noun_count[1]) + "\n")
                          
                          with open("verb_counts_by_increasing_count.txt", "w", encoding="utf8") as file:
                              for verb_count in list_verb_count:
                                  file.write(verb_count[0] + ", " + str(verb_count[1]) + "\n")
                          
                          

                        The first thing to note is that sorting a list is done in place. Consequently, we can just use list_noun_count.sort(). This is different from the built-in function sorted(), which returns a new sorted list.

                        The second thing to note is that we specified a key for sorting by count via lambda noun_count: noun_count[1]. Recall from Outputs 4.A and 4.B that we are sorting a list of tuples. If you need a refresher on sorting, consider reading "How to Use sorted() and .sort() in Python" by David Fundakowski.

                        Lastly, please make a habit of specifying utf8 as the encoding when creating a file. Otherwise, Python uses the OS default encoding [locale.getencoding()]. This is especially important on the Windows operating system because it defaults to cp1252.
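                        Below is a minimal sketch of why this matters. The printed default is machine-dependent, and the file name example_output.txt is just an illustration.

                        # Minimal sketch: the locale default can differ between machines,
                        # so spell out utf8 when writing files.
                        import locale

                        print(locale.getpreferredencoding())   # e.g. 'cp1252' on many Windows setups

                        with open("example_output.txt", "w", encoding="utf8") as file:
                            file.write("naïve café\n")          # safe regardless of the OS default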

                        The final output is provided below.

                        1. Output 5.A: Nouns sorted by count

                          car, 3
                          color, 3
                          leaves, 2
                          text, 2
                          tree, 2
                          
                          
                        2. Output 5.B: Verbs sorted by count

                          is, 4
                          are, 2
                          contains, 1
                          
                          

                        Alternative Approaches

                        An alternative to NLTK is spaCy. If you Google "Identify Nouns & Verbs in Text Using spaCy", you will obtain results that will help you start the journey.
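                        Below is a minimal sketch of the spaCy route. It assumes spaCy and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm); the POS labels it uses (NOUN, VERB) come from the Universal POS tag set rather than the Penn Treebank tags.

                        # Minimal sketch: nouns and verbs via spaCy instead of NLTK.
                        import spacy

                        nlp = spacy.load("en_core_web_sm")
                        doc = nlp("The leaves on a tree are green. The color of the car is red.")

                        nouns_and_verbs = [(token.text, token.pos_) for token in doc
                                           if token.pos_ in ("NOUN", "VERB")]
                        print(nouns_and_verbs)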

                        Summary

                        Determining nouns and verbs is simple. :-)

                        1. Tokenize using nltk.word_tokenize().

                        2. Associate a POS with each token using nltk.pos_tag().

                        3. Count the number of times that each combination of token and POS occurs using Counter from Python's collections module.

                        4. Filter such that only noun and verb POS remain.

                        5. Sort the data as needed to generate the desired output.

                        However, as is commonly said, the devil is in the details. If you are interested in the details, check out the reference section below.

      References

      1. Articles

        1. "A Good Part-of-Speech Tagger in about 200 Lines of Python" by Matthew Honnibal

        2. "Alphabetical list of part-of-speech tags used in the Penn Treebank Project" at UPenn.Edu

          1. Example entry - Tag: NN, Description: Noun, singular or mass

        3. "How to Iterate Through a Dictionary in Python" by Leodanis Pozo Ramos

        4. "How to Use sorted() and .sort() in Python" by David Fundakowski

        5. "NLTK Lookup Error" at Stack Overflow

          1. The first answer notes that the missing module is "the Perceptron Tagger"; its name in nltk.download is 'averaged_perceptron_tagger'.

          2. You can use this to fix the error: nltk.download('averaged_perceptron_tagger')

        6. "What to download in order to make nltk.tokenize.word_tokenize work?" at Stack Overflow

          1. Downloading all NLTK resources via nltk.download() takes approximately 2.5 GB.

          2. Instead, download just punkt_tab: import nltk; nltk.download('punkt_tab')

      2. Books

      3. Natural Language Toolkit (NLTK)

        1. NLTK Documentation

          1. Source code for nltk.tag.perceptron (under "All modules for which code is available": nltk, nltk.tag, nltk.tag.perceptron)

            TAGGER_JSONS = {
                "eng": {
                    "weights": "averaged_perceptron_tagger_eng.weights.json",
                    "tagdict": "averaged_perceptron_tagger_eng.tagdict.json",
                    "classes": "averaged_perceptron_tagger_eng.classes.json",
                },
            }

          2. NLTK API

            1. nltk.downloader module

              1. "The NLTK corpus and module downloader. This module defines several interfaces which can be used to download corpora, models, and other data packages that can be used with NLTK."

              2. Downloading packages: "Individual packages can be downloaded by calling the download() function with a single argument, giving the package identifier for the package that should be downloaded:"

                >>> download('treebank')
                [nltk_data] Downloading package 'treebank'...
                [nltk_data] Unzipping corpora/treebank.zip.

            2. nltk.tag package

              1. "This package contains classes and interfaces for part-of-speech tagging, or simply 'tagging'."

              2. "A 'tag' is a case-sensitive string that specifies some property of a token, such as its part of speech. Tagged tokens are encoded as tuples (tag, token). For example, the following tagged token combines the word 'fly' with a noun part of speech tag ('NN'):"

                >>> tagged_tok = ('fly', 'NN')

              3. "An off-the-shelf tagger is available for English. It uses the Penn Treebank tagset:"

                >>> from nltk import pos_tag, word_tokenize
                >>> pos_tag(word_tokenize("John's big idea isn't all that bad."))
                [('John', 'NNP'), ("'s", 'POS'), ('big', 'JJ'), ('idea', 'NN'), ('is', 'VBZ'), ("n't", 'RB'), ('all', 'PDT'), ('that', 'DT'), ('bad', 'JJ'), ('.', '.')]

              4. pos_tag(): "NB. Use pos_tag_sents() for efficient tagging of more than one sentence."

            3. nltk.tokenize package

              1. "Tokenizers divide strings into lists of substrings. For example, tokenizers can be used to find the words and punctuation in a string:"

                >>> from nltk.tokenize import word_tokenize
                >>> s = '''Good muffins cost $3.88\nin New York. Please buy me ... two of them.\n\nThanks.'''
                >>> word_tokenize(s)
                ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York', '.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

              2. word_tokenize(): "This particular tokenizer requires the Punkt sentence tokenization models to be installed."

              3. nltk.tokenize.punkt module (Punkt Sentence Tokenizer)

                1. "This tokenizer divides a text into a list of sentences by using an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It must be trained on a large collection of plaintext in the target language before it can be used."

                2. "The NLTK data package includes a pre-trained Punkt tokenizer for English."

      4. Python

        1. codecs - Codec registry and base classes (Binary Data Services), Standard Encodings

          1. Codec: cp1252 - Aliases: windows-1252 - Language: Western Europe

          2. Codec: utf_8 - Aliases: U8, UTF, utf8, cp65001 - Language: all languages

        2. Glossary, locale encoding

          1. On Windows, it is the ANSI code page (ex: "cp1252").

        3. Standard Library

      5. spaCy