Lab Policy

Models Meet Data

There will be a total of eleven labs in CS 307. Each lab will consist of two separate but related assignments: a Lab Model and a Lab Report.

Each lab will involve developing machine learning models for a real-world situation using real-world data.

Lab Model

The model portion of the lab will consist of two questions on PrairieLearn.

The Summary Statistics question will ask you to calculate several numeric summaries of the training data.

The Model question will autograde a model that you are asked to develop.

Model Submission

To save your models for submission to the autograder, use the dump function from the joblib library. This process of persisting a model to disk is called serialization.

from joblib import dump

# serialize (persist) the fitted model to disk for submission
dump(model_object, "filename.joblib")

The autograder will only accept a particular filename. That filename will always be provided in the lab instructions. Models submitted to the autograder must be less than 5MB on disk.
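As a minimal sketch of checking the size limit before submitting, assuming joblib is installed and using a stand-in dictionary in place of a fitted model (`model_object` and `filename.joblib` are placeholders; use the filename from your lab instructions):

```python
import os

from joblib import dump

model_object = {"coef": [0.1, 0.2]}  # stand-in for a fitted model

# Serialize the model, then check its on-disk size against the 5MB limit.
dump(model_object, "filename.joblib")
size_mb = os.path.getsize("filename.joblib") / (1024 * 1024)
print(f"Model size: {size_mb:.3f} MB")
```

If the file exceeds the limit, consider a simpler model or fewer estimators before resubmitting.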

There will be a substantial timeout between model submissions to the autograder. Do not expect to make multiple submissions near a deadline. More than anything else, the long timeout should encourage you to seriously consider model validation before submission.

Submissions for the summary statistics question will have a more modest timeout.

In general, you will have access to both a train and test set. We will also evaluate your model with additional holdout data, which we will call the production set. You will not have access to the production data.

When you submit a valid .joblib file containing your model, PrairieLearn will display a message in red text: “Content preview is not available for this type of file.” While the red text looks scary, this message is benign. It simply means that PrairieLearn cannot show you a preview of the file. It does not mean that there is an error!

Lab Report

In addition to simply developing models, you will also write a lab report using the IMRAD structure. A template Jupyter notebook is provided.

IMRAD Format

While we require the IMRAD format, that does not imply that you need to write an academic paper. Stick to the template provided and generally try to be concise.1 You are authorized to plagiarize from the lab instructions that describe the lab scenario and associated data.

In general, when writing your report, write as if the lab prompt did not exist, and assume the reader is wholly unfamiliar with CS 307 or the assignment you are completing. They will have some familiarity with the domain of the problem depending on the given background and goal.

Is this a non-trivial amount of “extra” work? Yes. Is it worth it? You betcha!2

Introduction

The introduction section should state the purpose of the report. It should explain the motivation behind the analysis and the goal of the report. It should also very briefly mention what data and models will be used.

Methods

The methods section should describe what you did and how you did it. We will break the methods section into two subsections.

Data

The data section should do three things:

  • Describe the available data
  • Calculate and report any relevant summary statistics
  • Include at least one relevant visualization

To ensure that you have properly described the data, you should include a full data dictionary.
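As a sketch of the kind of numeric summaries to report, assuming pandas is available; the `train` DataFrame and its column names here are made up, and your lab's actual train set will differ:

```python
import pandas as pd

# Hypothetical training data; your lab's train set will differ.
train = pd.DataFrame({"age": [23, 31, 45, 52], "income": [40, 55, 72, 61]})

# describe() reports count, mean, std, min, quartiles, and max per column.
summary = train.describe()
print(summary)
```

Pair summaries like these with your data dictionary so a reader knows both what each variable means and how it is distributed.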

Modeling

The modeling section should describe the modeling procedures that were performed. You should not simply state what each line of your Python code does. Instead, you should describe the modeling as if you were explaining it to another person.

This section will also collect the code used to train your models.

Results

The results section should plainly state the results, which will often be test metrics that evaluate the performance of your models.

You must also include one figure in the results (or discussion) section. This figure should help communicate the performance or usability of your chosen model. A figure in this context could be a visualization or a well-formatted table.

Do not report or make any decisions based on the production information reported in the autograder. You are writing this report largely to communicate whether or not you would put your model into production. You wouldn’t have production metrics before putting the model in production! The production data partially serves to illustrate “making predictions for new data” but also to prevent cheating to obtain required test metrics.
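One way to produce a well-formatted results table is with pandas; a minimal sketch, where the model names and metric values are made up:

```python
import pandas as pd

# Hypothetical test metrics; substitute your actual models and values.
results = pd.DataFrame(
    {
        "Model": ["KNN", "Decision Tree"],
        "Test RMSE": [3.21, 2.87],
    }
)

# In a notebook, a DataFrame as the last expression of a cell renders as an
# HTML table in the Quarto output; to_html() shows that markup explicitly.
html = results.to_html(index=False)
print(html[:40])
```

Leaving the DataFrame as the last expression of a cell is usually enough; `to_html` is shown only to make the HTML rendering explicit.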

Discussion

Be sure to state a conclusion, that is, whether or not you would use the model you trained and selected for the real world scenario described at the start of the lab!

Specifically, if you choose to put your model into practice:

  • What benefit does the model provide?
  • What limitations should be considered?

Or, if you choose to not put your model into practice:

  • What risks are avoided by not using the model?
  • What improvements would be necessary to consider the model for usage?

The discussion section is by far the most important, both in general, and for your lab grade. It should be given the most consideration, and is likely (but not required) to be the longest section.

Report Submission

After you complete your lab notebook, we recommend the following steps:

  1. Clear all output.
  2. Restart the Python kernel.
  3. Run all cells.
  4. Preview (render) the notebook with Quarto.3

Note that each of these corresponds to a button in VSCode when editing a Jupyter Notebook. The preview button may require first clicking the three dots to see more options.

The preview (render) step requires the Quarto CLI and the Quarto VSCode Extension. Installing these will allow you to render your Jupyter Notebook to a .html file using Quarto. This has a number of advantages over using Jupyter’s export feature.
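The render step can also be run from a terminal with the Quarto CLI; a sketch, assuming Quarto is installed and `lab-01.ipynb` stands in for your notebook:

```shell
nb="lab-01.ipynb"          # example filename; use your two-digit lab number
out="${nb%.ipynb}.html"    # Quarto names the HTML output after the notebook

# Render only if the Quarto CLI is available on this machine.
if command -v quarto >/dev/null 2>&1; then
  quarto render "$nb" --to html || true
fi
echo "$out"
```

The resulting .html file is the second of the two files you submit to Canvas.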

Following these steps will ensure that once you have submitted, we will very, very likely be able to reproduce your work.

Once you’re ready to submit, head to the relevant lab on Canvas. You are required to submit two files:

  1. lab-xx.ipynb
  2. lab-xx.html

Here xx should be the two-digit lab number. For example, with Lab 01 you will submit:

  1. lab-01.ipynb
  2. lab-01.html

After submitting to Canvas, please spend an extra minute to double check that your submission was accepted!

Late Submissions

Reports may be submitted late, with a 20% reduction per day.

Report submission will allow for unlimited attempts. However, be aware, the human graders will grade whichever version was most recently submitted at the time they choose to grade, which can be any time after the deadline. Importantly, if you submit one version before the deadline, and another after the deadline, they will grade the late version and you will be assessed a late penalty.

Once a grader has graded a report, you may not submit again, even if there are late days remaining. We do not recommend making a submission you are not willing to have graded.

Grading Rubric

Lab Reports will be graded on Canvas out of a possible 15 points. Each of the 15 points will have its own rubric item. Each rubric item will be assigned a value of 0, 0.5, or 1 corresponding to:

  • No issues: 1
  • Minor issues: 0.5
  • Major issues: 0

Rubric Items

  1. Is the source .ipynb notebook submitted?
    • Does it have the correct filename?
  2. Is a rendered .html report submitted?
    • Does it have the correct filename?
  3. Is the .html file properly rendered via Quarto?
    • No points will be granted if the file is rendered via Jupyter.
  4. Are both the source notebook and rendered report, including the code contained in them, well-formatted?
    • Is markdown used correctly?
    • Does the markdown render as expected?
    • Does code follow PEP 8? While we do not expect students to be code style experts, there are some very basics we would like you to follow:
      • No blank lines at the start of cells. No more than one blank line at the end of a cell.
      • Spaces around binary operators, except around = when passing keyword arguments to function parameters.
  5. Does the report have a title?
    • Does the title use (a reasonable variant of) Title Case?
  6. Does the introduction reasonably introduce the scenario?
    • Can a reader unfamiliar with CS 307 and the specific lab understand why a model is being developed?
  7. Does the methods section reasonably describe the data used?
    • Is a data dictionary, describing the target and each feature, included?
  8. Does the methods section reasonably describe model development?
    • Include information on models considered, parameters considered, tuning and selection procedures, and any other methods used during model development.
  9. Is a well-formatted exploratory visualization included in the data subsection of the methods section?
    • Does the visualization provide some useful insight that informs modeling or interpretation?
    • At minimum, a well-formatted visualization should include:
      • A title that uses Title Case.
      • A manually labeled \(x\)-axis using Title Case, including units if necessary.
      • A manually labeled \(y\)-axis using Title Case, including units if necessary.
      • A legend if plotting multiple categories of things.
  10. Does the results section provide a reasonable summary of the selected model’s performance?
  11. Is a well-formatted summary figure included in the results (or discussion) section?
    • This figure can be either a visualization or a well-formatted table.
    • Does the figure provide some insight into the performance or usability of the model?
    • A well-formatted table must be rendered as HTML in the resulting report.
  12. Is a conclusion stated in the discussion section?
    • Specifically, you must explicitly state whether or not you would use the model in practice.
  13. Does the conclusion have a reasonable justification?
    • Does the conclusion and justification consider the lab scenario?
    • Answer as if your job depends on it. In the future, that might be the case!
    • Using a single numeric metric is wholly insufficient, most importantly because it lacks context. You should give serious consideration to what errors can be made by your model, and what the consequences of those errors could be.
  14. Are the specifics of the conclusion included in the discussion?
    • Are the benefits and limitations discussed if you choose to use the model?
    • Are the risks and improvements discussed if you choose to not use the model?
  15. Throughout the discussion section, are course concepts used correctly and appropriately?
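The PEP 8 spacing basics from rubric item 4 can be sketched as follows; the variable names are arbitrary:

```python
# Spaces around binary operators:
x = 3
y = 4
total = x * y + 1

# ...but no spaces around "=" when passing a keyword argument:
rounded = round(total / 2, ndigits=1)
print(rounded)
```

Writing `ndigits = 1` inside the call, or `x=3` at the top level, would violate the convention described above.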

Footnotes

  1. You are not Charles Dickens and we are not paying you by the word.↩︎

  2. This is Midwestern for “yes” but enthusiastically.↩︎

  3. Importantly, this is not the export that Jupyter uses by default.↩︎