Thursday, December 22, 2016

Jumping into Spark (JIS): Python / Spark / Logistic Regression (Update 1)

In this blog we will use the Python interface to Spark to predict whether someone makes more or less than $50,000 per year. A logistic regression model will be used to make this determination.

Please note that no prior knowledge of Python, Spark or logistic regression is required. However, it is assumed that you are a seasoned software developer. 

There are various tutorials on Spark. There is the official documentation on Spark. However, if you are an experienced software professional and want to just jump in and kick the tires, there doesn't seem to be much available. Well, at least I couldn't find any.

Yes, there is Spark's Quick Start. Also, there are artifacts like the Databricks User Guide. Unfortunately, they have a smattering of stuff. You really don't get a chance to jump in.

Let me now explicitly define jumping in. Jumping in involves solving an almost trivial problem that demonstrates a good deal of the concepts and software. Also, it involves skipping steps. Links will be provided so that if someone is interested in the missing steps, they can look up the details. If skipping steps bothers you, immediately stop and go read a tutorial.

As mentioned earlier, we are going to use Python on Spark to address logistic regression for our jumping in. This is our almost trivial problem that demonstrates a good deal of the concepts and software.

The first thing that you need is a working environment. You could install and configure your own environment. However, that would not be in line with jumping in. Instead, I recommend using the Databricks Spark Community Edition (DSCE). If you are using DSCE, refer to "Welcome to Databricks" on how to create a cluster, create a notebook, attach the notebook to a cluster and actually use the notebook.

Next, you need to connect to the Spark environment. In the database world, you create a database connection. In the Spark world, you create a context (SparkContext / SQLContext / SparkSession). If you are using DSCE, the following three variables will be predefined for you:
1. SparkContext: sc
2. SparkSession: spark
3. SQLContext: sqlContext

If you would like the IPython notebook associated with this blog, click here. If for some reason you don't have software to read the IPython notebook, you can download a PDF version of it by clicking here.
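
If you are not using DSCE and have to create these yourself, the sketch below shows one way to do it. It assumes a local Spark 2.x installation with PySpark available; the master setting and application name are just placeholders.

from pyspark.sql import SparkSession, SQLContext

# Build (or reuse) a SparkSession; "local[*]" and the app name are placeholders.
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("jumping-into-spark") \
    .getOrCreate()

# The other two handles can be derived from the session.
sc = spark.sparkContext
sqlContext = SQLContext(sc)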


      Row / Column


      We are almost ready to code. First we have to talk about Row and DataFrame. A Row is just like a row in a spreadsheet or a row in a table. A DataFrame is just a collection of Rows. For the most part, you can think of a DataFrame as a table.

      Now for the first code snippet. Please note that I took the code snippet from Encode and assemble multiple features in PySpark at StackOverFlow.Com.

      Code Snippet 1
      
      
      from pyspark.sql import Row
      from pyspark.mllib.linalg import DenseVector
      
      row = Row("gender", "foo", "bar")
      
      dataFrame = sc.parallelize([
        row("0",  3.0, DenseVector([0, 2.1, 1.0])),
        row("1",  1.0, DenseVector([0, 1.1, 1.0])),
        row("1", -1.0, DenseVector([0, 3.4, 0.0])),
        row("0", -3.0, DenseVector([0, 4.1, 0.0]))
      ]).toDF()
      
      dataFrame.collect()

      Nothing fancy. You create a Row template which has column names gender, foo and bar. You then create a bunch of rows with actual data. Lastly, you group the rows into a DataFrame. DenseVector was used to demonstrate that a cell in a Row can have a complex data structure. If you are curious about parallelize and toDF, check the references at the end of the blog. This will be true for the rest of the blog. If you are not sure what some magic word means, go to the reference section at the end of the blog.

      If things are working, you should get an output like that shown below.

      Output of Code Snippet 1

      [Row(gender=u'0', foo=3.0, bar=DenseVector([0.0, 2.1, 1.0])),
       Row(gender=u'1', foo=1.0, bar=DenseVector([0.0, 1.1, 1.0])),
       Row(gender=u'1', foo=-1.0, bar=DenseVector([0.0, 3.4, 0.0])),
      Row(gender=u'0', foo=-3.0, bar=DenseVector([0.0, 4.1, 0.0]))]

      Things will now begin to get interesting. Next we are going to look at StringIndexer, OneHotEncoder and VectorAssembler. These are the items needed to allow algorithms which expect continuous features, such as logistic regression, to use categorical features.

      StringIndexer


      A StringIndexer converts categories to numbers. The numbers have a range from 0 to number of categories minus one. The most frequent category gets a number of zero, the second most frequent category gets a number of 1 and so on. We are going to use the code snippets from Preserve index-string correspondence spark string indexer from StackOverFlow.Com to demonstrate what the preceding English means.

      Let's actually create a StringIndexer and use it to map/fit/transform categories to numbers.

      Code Snippet 2

      dataFrame = sqlContext.createDataFrame(
          [(0, "a"), (1, "b"), (2, "b"), (3, "c"), (4, "c"), (5, "c"), (6,'d'), (7,'d'), (8,'d'), (9,'d')],
          ["id", "category"])
      
      dataFrame.collect()
      
      from pyspark.ml.feature import StringIndexer
      stringIndexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
      modelStringIndexer = stringIndexer.fit(dataFrame)
      transformedDataFrame = modelStringIndexer.transform(dataFrame)
      transformedDataFrame.collect()
      

      Output of Code Snippet 2

      [Row(id=0, category=u'a', categoryIndex=3.0),
       Row(id=1, category=u'b', categoryIndex=2.0),
       Row(id=2, category=u'b', categoryIndex=2.0),
       Row(id=3, category=u'c', categoryIndex=1.0),
       Row(id=4, category=u'c', categoryIndex=1.0),
       Row(id=5, category=u'c', categoryIndex=1.0),
       Row(id=6, category=u'd', categoryIndex=0.0),
       Row(id=7, category=u'd', categoryIndex=0.0),
       Row(id=8, category=u'd', categoryIndex=0.0),
       Row(id=9, category=u'd', categoryIndex=0.0)]
      

      Notice how d's got 0.0 because they are the most numerous. The letter c's got 1.0 because they are the second most numerous. And so on. The code snippet below will make this more clear.

      Code Snippet 3

      transformedDataFrame.select('category','categoryIndex').distinct().orderBy('categoryIndex').show()
      

      Output of Code Snippet 3

      +--------+-------------+
      |category|categoryIndex|
      +--------+-------------+
      |       d|          0.0|
      |       c|          1.0|
      |       b|          2.0|
      |       a|          3.0|
      +--------+-------------+
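
      As a side note, if you later need the mapping from index back to category (for example, to turn predictions back into readable labels), the fitted StringIndexer model keeps it. A minimal sketch, reusing modelStringIndexer from Code Snippet 2:

      # labels is ordered by index: labels[0] is the category mapped to 0.0, and so on.
      print(modelStringIndexer.labels)   # ['d', 'c', 'b', 'a']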

      OneHotEncoder


      A OneHotEncoder converts category numbers to binary vectors with at most a single one-value per row. For a true understanding of one-hot encoding, refer to the associated Wikipedia page.

      Next, let's use a OneHotEncoder to transform the category index that we created earlier into a binary vector.

      Code Snippet 4

      from pyspark.ml.feature import OneHotEncoder
      oneHotEncoder = OneHotEncoder(inputCol="categoryIndex", outputCol="categoryVector")
      oneHotEncodedDataFrame = oneHotEncoder.transform(transformedDataFrame)
      oneHotEncodedDataFrame.show()
      

      Output of Code Snippet 4

      +---+--------+-------------+--------------+
      | id|category|categoryIndex|categoryVector|
      +---+--------+-------------+--------------+
      |  0|       a|          3.0|     (3,[],[])|
      |  1|       b|          2.0| (3,[2],[1.0])|
      |  2|       b|          2.0| (3,[2],[1.0])|
      |  3|       c|          1.0| (3,[1],[1.0])|
      |  4|       c|          1.0| (3,[1],[1.0])|
      |  5|       c|          1.0| (3,[1],[1.0])|
      |  6|       d|          0.0| (3,[0],[1.0])|
      |  7|       d|          0.0| (3,[0],[1.0])|
      |  8|       d|          0.0| (3,[0],[1.0])|
      |  9|       d|          0.0| (3,[0],[1.0])|
      +---+--------+-------------+--------------+

      SparseVector

      The column categoryVector is a SparseVector. It has 3 parts. The first part is the length of the vector. The second part is the list of indices that contain values. The third part is the list of actual values at those indices. Below is a code snippet demonstrating this.

      Code Snippet 5

      from pyspark.mllib.linalg import SparseVector
      v1 = SparseVector(5, [0,3], [10,9])
      for x in v1:
        print(x)
      

      Output of Code Snippet 5

      10.0
      0.0
      0.0
      9.0
      0.0
      

      Notice how category a (categoryVector = (3,[],[])) gets the all-zero vector. By default, the OneHotEncoder drops the last category; if every category had its own slot, the entries of each vector would always sum to one and the columns would be linearly dependent. The code snippet below will provide a better visual for this.

      Code Snippet 6

      oneHotEncodedDataFrame.select('category','categoryIndex', 'categoryVector').distinct().orderBy('categoryIndex').show()
      

      Output of Code Snippet 6

      +--------+-------------+--------------+
      |category|categoryIndex|categoryVector|
      +--------+-------------+--------------+
      |       d|          0.0| (3,[0],[1.0])|
      |       c|          1.0| (3,[1],[1.0])|
      |       b|          2.0| (3,[2],[1.0])|
      |       a|          3.0|     (3,[],[])|
      +--------+-------------+--------------+
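
      If you actually want a slot for every category, the encoder has a dropLast flag for that. A minimal sketch, reusing transformedDataFrame from Code Snippet 2 (the output column name is just an illustrative choice):

      from pyspark.ml.feature import OneHotEncoder

      # dropLast=False keeps a slot for the last category, so 'a' becomes (4,[3],[1.0]) instead of (3,[],[]).
      fullEncoder = OneHotEncoder(inputCol="categoryIndex", outputCol="categoryVectorFull", dropLast=False)
      fullEncoder.transform(transformedDataFrame).show()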
      

      VectorAssembler

      By this point you are probably getting impatient. Luckily, we have just one more item to cover before we get to logistic regression. That one item is the VectorAssembler. A VectorAssembler just concatenates columns together. As usual, we will demonstrate what the words mean via a code snippet.

      Code Snippet 7

      from pyspark.ml.feature import VectorAssembler
      dataFrame_1 = spark.createDataFrame([(1, 2, 3), (4,5,6)], ["a", "b", "c"])
      vectorAssembler = VectorAssembler(inputCols=["a", "b", "c"], outputCol="features")
      dataFrame_2 = vectorAssembler.transform(dataFrame_1)
      dataFrame_2.show()

      Output of Code Snippet 7

      +---+---+---+-------------+
      |  a|  b|  c|     features|
      +---+---+---+-------------+
      |  1|  2|  3|[1.0,2.0,3.0]|
      |  4|  5|  6|[4.0,5.0,6.0]|
      +---+---+---+-------------+

      Logistic Regression

      We have now learned enough Spark to look at a specific problem involving logistic regression. We are going to work through the example provided in the Databricks documentation.

        Drop / Create Table

        1. Drop Table

        %sql
        DROP TABLE IF EXISTS adult

        2. Create Table

        %sql

        CREATE TABLE adult (
          age               DOUBLE,
          workclass         STRING,
          fnlwgt            DOUBLE,
          education         STRING,
          education_num     DOUBLE,
          marital_status    STRING,
          occupation        STRING,
          relationship      STRING,
          race              STRING,
          sex               STRING,
          capital_gain      DOUBLE,
          capital_loss      DOUBLE,
          hours_per_week    DOUBLE,
          native_country    STRING,
          income            STRING)
        USING com.databricks.spark.csv
        OPTIONS (path "/databricks-datasets/adult/adult.data", header "true")

        Convert table to a DataFrame

        dataset = spark.table("adult")

        Get a list of columns in the original dataset

        cols = dataset.columns

        Note: this step has to be done here and not later because the Databricks example re-uses the variable dataset.
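
        As a side note, if you would rather skip the SQL table altogether, the same file can be read straight into a DataFrame. A minimal sketch, assuming Spark 2.x and the same Databricks dataset path; inferSchema lets Spark guess the column types instead of the explicit DOUBLE/STRING declarations above, and the resulting column names depend on the file's header row.

        # Read the CSV file directly into a DataFrame instead of going through a SQL table.
        dataset = spark.read.csv("/databricks-datasets/adult/adult.data", header=True, inferSchema=True)
        cols = dataset.columns
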
        Perform One Hot Encoding on columns of interest

        from pyspark.ml import Pipeline
        from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

        categoricalColumns = ["workclass", "education", "marital_status", "occupation", "relationship", "race", "sex", "native_country"]
        stages = [] # stages in our Pipeline
        for categoricalCol in categoricalColumns:
          stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol+"Index")
          encoder = OneHotEncoder(inputCol=categoricalCol+"Index", outputCol=categoricalCol+"classVec")
          # Add stages.  These are not run here, but will run all at once later on.
          stages += [stringIndexer, encoder]

        Create a StringIndexer on income

        label_stringIdx = StringIndexer(inputCol = "income", outputCol = "label")
        stages += [label_stringIdx]

        Combine feature columns into a single vector column using VectorAssembler

        numericCols = ["age", "fnlwgt", "education_num", "capital_gain", "capital_loss", "hours_per_week"]
        assemblerInputs = [c + "classVec" for c in categoricalColumns] + numericCols
        assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
        stages += [assembler]

        Put data through all of the feature transformations using the stages in the pipeline

        pipeline = Pipeline(stages=stages)
        pipelineModel = pipeline.fit(dataset)
        dataset = pipelineModel.transform(dataset)

        Keep relevant columns

        selectedcols = ["label", "features"] + cols
        dataset = dataset.select(selectedcols)
        display(dataset)

        Split data into training and test sets

        # Set seed for reproducibility.
        (trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)
        print(trainingData.count())
        print(testData.count())

        Train logistic regression model and then make predictions

        from pyspark.ml.classification import LogisticRegression
        lr = LogisticRegression(labelCol="label", featuresCol="features", maxIter=10)
        lrModel = lr.fit(trainingData)
        predictions = lrModel.transform(testData)
        selected = predictions.select("prediction", "age", "occupation")
        display(selected)
        
        Output

        prediction  age occupation
        ----------  --- --------------
        ...
        0           20  Prof-specialty
        1           35  Prof-specialty
        ...

        So, a prediction of 0 means that the person earns <=50K, while a prediction of 1 means that the person earns >50K.
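
        The example stops at eyeballing the predictions. If you also want a single number for how well the model did, Spark ships an evaluator for binary classifiers. A minimal sketch, assuming the predictions DataFrame from the step above:

        from pyspark.ml.evaluation import BinaryClassificationEvaluator

        # The default metric is area under the ROC curve; closer to 1.0 is better.
        evaluator = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="rawPrediction")
        print(evaluator.evaluate(predictions))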

          Summary

          In summary, we jumped in by using Python on Spark to address logistic regression. We did not read any tutorials. We did skip steps. However, links are provided in the reference section to fill in the steps if the reader so desires.

          The advantage of jumping in is that you learn by solving a problem. Also, you don't spend weeks or months learning material like that listed in the references below before you can do anything. The disadvantage is that steps are skipped and full understanding of even the provided steps won't be present. It is realized that jumping in is not for everybody. For some people, standard tutorials are the way to begin.

          References

          Thursday, September 29, 2016

          Can a Purely State Based Database Schema Migration Be Trusted? by Robert Lucente

          A really good article on the tradeoffs between state based vs migration based schema transformation is titled Critiquing two different approaches to delivering databases: Migrations vs state by Alex Yates on 6/18/2015. The blog Database Schema Migration by S. Lott on 9/27/2016 ends with "It's all procedural migration. I'm not [sure] declarative ("state") tools can be trusted beyond discerning the changes and suggesting a possible migration."

           Let me start by examining the statement of the problem: state based vs migration based schema transformation. The problem statement implies that the solution is an either-or type of thing. Why not both?

           As opposed to speaking in generalities, let me pick a specific problem, often encountered in real systems, that demonstrates the core issue. Also, let me pick a particular tool set to execute on this particular problem.

           The specific problem involves
           1. Adding some domain table and its associated data.
           2. Adding a not null / mandatory column to some table that already has data.
           3. Creating a foreign key between the domain table and the new mandatory column.

           Below is a picture for the above words. The "stuff" in color are the tables and columns being added.


           In a migration based schema approach the following steps would have to be performed
           1. SomeTable exists with data.
           2. A new domain table (SomeDomain) gets created.
           3. The new domain table (SomeDomain) gets populated with data.
           4. A new nullable / not mandatory column SomeDomainUuid is added to SomeTable.
           5. Column SomeDomainUuid in SomeTable gets populated with data.
           6. Column SomeDomainUuid in SomeTable is made not null / mandatory.
           7. A foreign key is created between the two tables.

           Notice that the above is very labor intensive and involves 7 steps. Software should be able to figure out all the steps and their sequences except for the following
           1. The new domain table (SomeDomain) gets populated with data.
           2. Column SomeDomainUuid in SomeTable gets populated with data.

           The key thing to notice is that the steps that can't be automated involve data. There is no way for the software to know about the specifics of the data.

           Now that we have defined a specific problem, let's execute on solving the problem using a specific tool set. I am going to use the Visual Studio SQL Server Database Project concept with SQL Server as the target database. Via a series of clicks, the end state of the database is specified in the "Visual Studio SQL Server Database Project".

           Next, we write a script (Populate_dbo_SomeDomain.sql) to populate the SomeDomain table with data.
                                      
                                      
           INSERT INTO dbo.SomeDomain
               VALUES
               (newid(), 'Fred'), (newid(), 'Barney');


           The second step is to write a script (Update_dbo_Deal_AdvertiserTypeUuid.sql) to populate the SomeDomainUuid column in SomeTable.


           UPDATE [dbo].[SomeTable]
           SET    [SomeDomainUuid] = (SELECT SomeDomainUuid
                                      FROM   [dbo].[SomeDomain]
                                      WHERE  Name = 'Fred');
                                      
                                      
           The last preparation step is to write a script (Script.PostDeployment.sql) to run the above 2 scripts after the state based change has happened.


           :r ".\Populate_dbo_SomeDomain.sql"
           go

           :r ".\Update_dbo_Deal_AdvertiserTypeUuid.sql"
           go


           Now that the desired end state has been specified and the data manipulation scripts written, it is time to modify a database. The Microsoft terminology for this is "publishing the database project". There will be an issue because the state changes will be made and then the Script.PostDeployment.sql script will be run. In between the state changes and the script being run, there will need to be data in the SomeDomainUuid column in the table SomeTable. This issue is addressed by using the GenerateSmartDefaults option when publishing the database project.

           Let's summarize what this combination of state based and migration based schema transformation has allowed us to do. We were able to take 7 steps and reduce them down to 2 steps. These 2 steps couldn't be automated anyway because they involved data. These are the pros. The con is that you have to be familiar with the framework and select the GenerateSmartDefaults option out of the 60 plus available options.

           In conclusion, a purely state based approach can't work because of data. There is no way for the software to know how to do data migrations. Only humans know the correct way to do data migrations. In our example, there is no way for the software to know whether the new column SomeDomainUuid is to be initially populated with "Fred" or "Barney". This is a long-winded and nice way of saying that a purely state based database schema migration can't be trusted. However, the combination of state based and migration based can truly improve productivity.

                                      Saturday, February 27, 2016

                                      Gentle Introduction to Various Math Stuff

                                      I recently attended a meetup in Pittsburgh titled Analytics of Social Progress: When Machine Learning meets Human Problems given by Amy Hepner. She did an outstanding job of introducing some math concepts very simply and intuitively. If you are interested in the slides showing how she did this, you can go to her web site by clicking here.

                                      This got me thinking about how I could help others with simple and intuitive ways to explain math stuff. I get annoyed when people say "doubly differentiable" as opposed to no gaps and no kinks. What makes this difficult is that everyone is at a different level and so you won't be able to please most people. However, I figure, any help is better than no help at all.

                                       As expected, there is already plenty of material on the internet. For statistics, a good place to start is "The Most Comprehensive Review of Comic Books Teaching Statistics by Rasmus Baath." A second good place to look is "A Brief Review of All Comic Books Teaching Statistics by Rasmus Baath and Christian Robert." For links to the books themselves, see the list below.
                                      1. The Cartoon ...
                                        1. The Cartoon Guide to Statistics by Larry Gonick, Woollcott Smith
                                         2. The Cartoon Introduction to Statistics by Grady Klein, Alan Dabney
                                      2. Manga Guide to Statistics by Shin Takahashi
                                      3. ... for Dummies
                                        1. Biostatistics For Dummies by John Pezzullo
                                        2. Business Statistics For Dummies by Alan Anderson
                                        3. Predictive Analytics For Dummies by Anasse Bari
                                        4. Probability For Dummies by Deborah J. Rumsey
                                        5. Psychology Statistics For Dummies by Donncha Hanna
                                        6. Statistical Analysis with Excel For Dummies by Joseph Schmuller
                                        7. Statistics Essentials For Dummies by Deborah J. Rumsey
                                        8. Statistics for Big Data For Dummies by Alan Anderson
                                        9. Statistics For Dummies by Deborah J. Rumsey
                                        10. Statistics II For Dummies by Deborah J. Rumsey
                                        11. Statistics Workbook For Dummies by Deborah J. Rumsey
                                        12. Statistics: 1,001 Practice Problems For Dummies (+ Free Online Practice) by Consumer Dummies
                                       For a gentle introduction to other math stuff, you can check out the list below. People have complained that the list below is too long. My response is that if you are not willing to spend 10 minutes to skim through the list, you are not ready to make the commitment to upgrade your math skills.
                                      1. Algebra ...
                                        1. Algebra I For Dummies by Mary Jane Sterling
                                        2. Algebra I Essentials For Dummies by Mary Jane Sterling
                                        3. Algebra I Workbook For Dummies by Mary Jane Sterling
                                        4. Algebra II For Dummies by Mary Jane Sterling
                                        5. Algebra II Workbook For Dummies by Mary Jane Sterling
                                        6. Algebra II: 1,001 Practice Problems For Dummies (+ Free Online Practice) by Mary Jane Sterling
                                      2. Basic Math and Pre-Algebra ...
                                        1. Basic Math and Pre-Algebra For Dummies by Mark Zegarelli
                                        2. Basic Math and Pre-Algebra: 1,001 Practice Problems For Dummies (+ Free Online Practice) by Mark Zegarelli
                                      3. Calculus ...
                                        1. Calculus For Dummies by Mark Ryan
                                        2. Calculus II For Dummies by Mark Zegarelli
                                        3. Calculus Essentials For Dummies by Mark Ryan
                                        4. Calculus Workbook For Dummies by Mark Ryan
                                        5. Calculus: 1,001 Practice Problems For Dummies (+ Free Online Practice) by Patrick Jones
                                      4. Complete Idiot's Guide to Algebra Word Problems by Izolda Fotiyeva
                                      5. Cartoon Guide ...
                                        1. Cartoon Guide to Calculus by Larry Gonick
                                        2. Cartoon Guide to Physics by Larry Gonick
                                      6. Data ...
                                        1. Data Mining For Dummies by Meta S. Brown
                                        2. Data Science For Dummies by Lillian Pierson
                                        3. Data Smart: Using Data Science to Transform Information into Insight by John W. Foreman
                                      7. Differential Equations ...
                                        1. Differential Equations For Dummies by Steven Holzner
                                        2. Differential Equations Workbook For Dummies by Steven Holzner
                                      8. Excel Data Analysis For Dummies by Stephen L. Nelson
                                      9. Geometry ...
                                        1. Geometry Essentials For Dummies by Mark Ryan
                                        2. Geometry For Dummies by Mark Ryan
                                        3. Geometry Workbook For Dummies by Mark Ryan
                                        4. Geometry: 1,001 Practice Problems For Dummies (+ Free Online Practice) by Allen Ma
                                      10. How to Solve Word Problems in Algebra by Mildred Johnson
                                      11. Linear Algebra For Dummies by Mary Jane Sterling
                                      12. Manga Guide to ...
                                        1. Manga Guide to Calculus by Hiroyuki Kojima
                                        2. Manga Guide to Linear Algebra by Shin Takahashi
                                        3. Manga Guide to Physics by Hideo Nitta
                                         4. Manga Guide to Regression Analysis by Shin Takahashi
                                        5. Manga Guide to Relativity by Hideo Nitta
                                      13. Math Word Problems ...
                                        1. Math Word Problems Demystified by Allan Bluman
                                        2. Math Word Problems For Dummies by Mary Jane Sterling
                                      14. Optimization Modeling with Spreadsheets by Kenneth R. Baker
                                      15. Physics ...
                                        1. Physics I For Dummies by Steven Holzner
                                        2. Physics I Workbook For Dummies by Steven Holzner
                                        3. Physics II For Dummies by Steven Holzner
                                      16. Pre-Calculus ...
                                        1. Pre-Calculus For Dummies by Yang Kuang
                                        2. Pre-Calculus Workbook For Dummies by Yang Kuang
                                        3. Pre-Calculus: 1,001 Practice Problems For Dummies (+ Free Online… by Mary Jane Sterling
                                      17. Predictive Analytics For Dummies by Anasse Bari
                                      18. R For Dummies by Andrie de Vries
                                      19. Schaum's ...
                                        1. Schaum's Outline of Introduction to Probability and Statistics by Seymour Lipschutz
                                        2. Schaum's Outline of Probability by Seymour Lipschutz
                                        3. Schaum's Outline of Probability and Statistics: 760 Solved Problems + 20 Videos by John Schiller
                                        4. Schaum's Outline of Statistics by Murray Spiegel
                                      20. Technical Math For Dummies by Barry Schoenborn
                                      21. Trigonometry ...
                                        1. Trigonometry For Dummies by Mary Jane Sterling
                                        2. Trigonometry Workbook For Dummies by Mary Jane Sterling

                                      Sunday, June 7, 2015

                                      Quantifying the Efficacy of Machine Learning Algorithms

                                       There are many articles and books and talks and so on all making claims about machine learning algorithms. Out of those, how many actually QUANTIFY THE EFFICACY OF THE ALGORITHM?

                                      Below are some references which will hopefully save people some leg work and help others quantify the performance of their algorithms.
                                       1. Articles
                                       2. Books
                                         1. Assessing and Improving Prediction and Classification by Timothy Masters
                                         2. Evaluating and Comparing the Performance of Machine Learning Algorithms by Melanie Mitchell
                                         3. Evaluating Learning Algorithms: A Classification Perspective by Japkowicz, Shah
                                           1. Amazon
                                             1. Customer Reviews
                                               1. Howard B. Bandy on July 7, 2014
                                                 1. ... examples presented are all of stationary data ...
                                           2. Cambridge.Org
                                             1. Looking for an examination copy?
                                               1. If you are interested in the title for your course we can consider offering an examination copy. To register your interest please contact collegesales@cambridge.org providing details of the course you are teaching.
                                           3. Google
                                             1. Has table of contents and you can look at some randomly selected pages
                                           4. MohakShah.Com
                                             1. Email Address: eval@mohakshah.com
                                               1. For electronic editions, ...
                                             2. Computing Resources
                                         4. Evaluation and Analysis of Supervised Learning Algorithms and Classifiers by Niklas Lavesson
                                         5. Machine Learning and Data Mining: 14 Evaluation and Credibility by Pier Luca Lanzi

                                                  Sunday, May 24, 2015

                                                  Python Books / Videos on Algorithms and Math

                                                  Below is a list of references on Python that are related to algorithms or mathematics. The hope is to save others some leg work. For a complete list of Python books and videos, click here.
                                                  1. Algorithms & Data Structures in Python by S Jagannathan, N Sinenian
                                                  2. Annotated Algorithms in Python by M Di Pierro
                                                  3. Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference by C Davidson-Pilon
                                                  4. Building Machine Learning Systems with Python by W Richert, L Pedro Coelho
                                                  5. Building Probabilistic Graphical Models with Python by K R Karkera
                                                  6. Computational Physics by M Newman
                                                  7. Data Structure and Algorithmic Thinking with Python by N Karumanchi
                                                  8. Data Structures and Algorithms Using Python by R D Necaise
                                                  9. Data Structures and Algorithms: Using Python and C++ by D M Reed, J Zelle
                                                  10. Data Structures and Algorithms in Python by M T Goodrich, R Tamassia, ...
                                                  11. Data Structures and Algorithms with Python by K D Lee, S Hubbard
                                                  12. Doing Math with Python by Amit Saha
                                                  13. Equilibrium Statistical Physics: with Computer simulations in Python by Leonard M. Sander
                                                  14. Image Processing and Acquisition using Python by R Chityala, S Pudipeddi
                                                  15. Introduction to Machine Learning with Python by S Guido
                                                  16. Introduction to Numerical Programming: A Practical Guide for Scientists and Engineers Using Python and C/C++ by T A Beu
                                                  17. Learning scikit-learn: Machine Learning in Python by R Garreta, G Moncecchi
                                                  18. Machine Learning in Python: Essential Techniques for Predictive Analysis by M Bowles
                                                  19. Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python by T W Miller
                                                  20. Mathematics and Python Programming by J C Bautista
                                                  21. Mathematics for the Digital Age and Programming in Python by M Litvin, G Litvin
                                                  22. Modeling Techniques in Predictive Analytics with Python and R by T W Miller
                                                  23. Numerical Methods in Engineering with Python by J Kiusalaas
                                                  24. OpenCV Computer Vision with Python by J Howse
                                                  25. Parallel Programming with Python by J Palach
                                                  26. Primer on Scientific Programming with Python by H P Langtangen
                                                  27. Problem Solving with Algorithms and Data Structures Using Python by B N Miller, D L Ranum
                                                    1. Edition: 2
                                                  28. Programming and Mathematical Thinking: A Gentle Introduction to Discrete Math Featuring Python by A M Stavely
                                                  29. Programming Computer Vision with Python: Tools and algorithms for analyzing images by J E Solem
                                                  30. Python Algorithms: Mastering Basic Algorithms in the Python Language by M L Hetland
                                                  31. Python for Signal Processing: Featuring IPython Notebooks by J Unpingco
                                                  32. Python for Scientists by John M. Stewart
                                                  33. Python Scripting for Computational Science by H P Langtangen
                                                   34. Scientific Computation: Python Hacking for Math Junkies by B E Shapiro
                                                  35. Statistics, Data Mining, and Machine Learning in Astronomy by Z Ivezic, A Connolly, ...
                                                  36. Think DSP - Digital Signal Processing in Python by A B Downey