Lab Policy
Models Meet Data
There will be a total of eleven labs in CS 307. Each lab will consist of two separate but related assignments:
- Lab Model: You will develop a model and submit it to PrairieLearn.
- Lab Report: You will write a report and submit it to Canvas.
Each lab will involve developing machine learning models for a real world situation using real world data.
- The model you submit will be graded based on its performance.
- The report you submit will be graded based on its ability to communicate the purpose, performance, and usability of the model.
Lab Model
The model portion of the lab will consist of two questions on PrairieLearn.
The Summary Statistics question will ask you to calculate several numeric summaries of the training data.
The Model question will autograde a model that you are asked to develop.
Model Submission
To save your models for submission to the autograder, use the dump function from the joblib library. This process of persisting a model to disk is called serialization.
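For example, a fitted model can be serialized as shown below. Here, model_object and "filename.joblib" are placeholders: use your own fitted model and the exact filename given in the lab instructions. The DummyRegressor is only a stand-in so the snippet runs on its own.

```python
from joblib import dump
from sklearn.dummy import DummyRegressor

# placeholder model standing in for whatever model you actually develop
model_object = DummyRegressor().fit([[0], [1]], [0, 1])

# serialize (persist) the fitted model to disk for submission;
# replace "filename.joblib" with the filename given in the lab instructions
dump(model_object, "filename.joblib")
```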
The autograder will only accept a particular filename. That filename will always be provided in the lab instructions. Models submitted to the autograder must be less than 5MB on disk.
In general, you will have access to both a train and test set. We will also evaluate your model with additional holdout data, which we will call the production set. You will not have access to the production data.
Lab Report
In addition to simply developing models, you will also write a lab report using the IMRAD structure. A template Jupyter notebook is provided.
IMRAD Format
While we require the IMRAD format, that does not imply that you need to write an academic paper. Stick to the template provided and generally try to be concise.1 You are authorized to plagiarize from the lab instructions that describe the lab scenario and associated data.
Introduction
The introduction section should state the purpose of the report. It should explain the why and the goal of the report. It should very briefly mention what data and models will be used.
Methods
The methods section should describe what you did and how you did it. We will break the methods section into two subsections.
Data
The data section should do three things:
- Describe the available data
- Calculate and report any relevant summary statistics
- Include at least one relevant visualization
To ensure that you have properly described the data, you should include a full data dictionary.
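As a minimal sketch only, assuming the training data is loaded as a pandas DataFrame (the column name below is made up for illustration), summary statistics and a labeled visualization might be produced like this:

```python
import pandas as pd
import matplotlib.pyplot as plt

# hypothetical training data standing in for the lab's actual train set
train = pd.DataFrame({"target": [1.2, 3.4, 2.5, 4.1, 3.3, 2.8]})

# numeric summaries of the training data
print(train.describe())

# a labeled histogram of the (hypothetical) target variable
plt.hist(train["target"])
plt.title("Distribution of the Target Variable")
plt.xlabel("Target Value")
plt.ylabel("Count")
plt.show()
```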
Modeling
The modeling section should describe the modeling procedures that were performed. You should not simply state what each line of your Python code does. Instead, you should describe the modeling as if you were explaining it to another person.
This section will also collect the code used to train your models.
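As an illustration of what that collected code might look like, here is a hedged scikit-learn sketch with made-up data, a single candidate model, and an arbitrary tuning grid; the actual models, features, and procedures will depend on the lab:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

# made-up data standing in for the lab's train and test sets
X, y = make_regression(n_samples=200, n_features=5, random_state=307)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=307)

# tune a single candidate model with cross-validation on the training data
knn = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": [1, 5, 10, 25]},
    cv=5,
)
knn.fit(X_train, y_train)
print(knn.best_params_)
```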
Results
The results section should plainly state the results, which will often be test metrics that evaluate the performance of your models.
You must also include one figure in the results (or discussion) section. This figure should help communicate the performance or usability of your chosen model. A figure in this context could be a visualization or a well-formatted table.
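For instance, test metrics for a chosen model could be collected into a small table that renders as HTML in the report. The snippet below is a self-contained sketch with made-up data, a trivial placeholder model, and arbitrary metric choices; report whatever metrics are appropriate for your lab:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# made-up data and a trivial placeholder model for illustration only
X, y = make_regression(n_samples=200, n_features=5, random_state=307)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=307)
model = DummyRegressor().fit(X_train, y_train)

# collect test metrics in a small table; as the last expression of a
# notebook cell, the DataFrame renders as an HTML table in the report
results = pd.DataFrame(
    {
        "Metric": ["MAE", "RMSE"],
        "Test Value": [
            mean_absolute_error(y_test, model.predict(X_test)),
            mean_squared_error(y_test, model.predict(X_test)) ** 0.5,
        ],
    }
)
print(results)
```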
Discussion
Be sure to state a conclusion, that is, whether or not you would use the model you trained and selected for the real world scenario described at the start of the lab!
Specifically, if you choose to put your model into practice:
- What benefit does the model provide?
- What limitations should be considered?
Or, if you choose to not put your model into practice:
- What risks are avoided by not using the model?
- What improvements would be necessary to consider the model for usage?
The discussion section is by far the most important, both in general and for your lab grade. It should be given the most consideration, and is likely (but not required) to be the longest section.
Report Submission
After you complete your lab notebook, we recommend the following steps:
- Clear all output.
- Restart the Python kernel.
- Run all cells.
- Preview (render) the notebook with Quarto.3
Note that each of these corresponds to a button in VSCode when editing a Jupyter Notebook. The preview button may require first clicking the three dots to see more options.
The preview (render) step requires the Quarto CLI and the Quarto VSCode Extension. Installing these will allow you to render your Jupyter Notebook as a .html file using Quarto. This has a number of advantages over using Jupyter’s export feature.
Following these steps will ensure that once you have submitted, we will very, very likely be able to reproduce your work.
Once you’re ready to submit, head to the relevant lab on Canvas. You are required to submit two files:
lab-xx.ipynb
lab-xx.html
Here xx should be the two-digit lab number. For example, with Lab 01 you will submit:
lab-01.ipynb
lab-01.html
Late Submissions
Reports may be submitted late, with a 20% reduction per day.
Report submission will allow for unlimited attempts. However, be aware that the human graders will grade whichever version was most recently submitted at the time they choose to grade, which can be any time after the deadline. Importantly, if you submit one version before the deadline and another after the deadline, they will grade the late version and you will be assessed a late penalty.
Once a grader has graded a report, you may not submit again, even if there are late days remaining. We do not recommend making a submission you are not willing to have graded.
Grading Rubric
Lab Reports will be graded on Canvas out of a possible 15 points. Each of the 15 points will have its own rubric item. Each rubric item will be assigned a value of 0, 0.5, or 1 corresponding to:
- No issues: 1
- Minor issues: 0.5
- Major issues: 0
Rubric Items
- Is the source .ipynb notebook submitted?
  - Does it have the correct filename?
- Is a rendered .html report submitted?
  - Does it have the correct filename?
- Is the .html file properly rendered via Quarto?
  - No points will be granted if the file is rendered via Jupyter.
- Are both the source notebook and rendered report, including the code contained in them, well-formatted?
  - Is markdown used correctly?
  - Does the markdown render as expected?
  - Does code follow PEP 8? While we do not expect students to be code style experts, there are some very basics we would like you to follow (see the short example after this rubric):
    - No blank lines at the start of cells. No more than one blank line at the end of a cell.
    - Spaces around binary operators, except when passing arguments to function parameters.
- Does the report have a title?
  - Does the title use (a reasonable variant of) Title Case?
- Does the introduction reasonably introduce the scenario?
  - Can a reader unfamiliar with CS 307 and the specific lab understand why a model is being developed?
- Does the methods section reasonably describe the data used?
  - Is a data dictionary, describing the target and each feature, included?
- Does the methods section reasonably describe model development?
  - Include information on models considered, parameters considered, tuning and selection procedures, and any other methods used during model development.
- Is a well-formatted exploratory visualization included in the data subsection of the methods section?
  - Does the visualization provide some useful insight that informs modeling or interpretation?
  - At minimum, a well-formatted visualization should include:
    - A title that uses Title Case.
    - A manually labeled x-axis using Title Case, including units if necessary.
    - A manually labeled y-axis using Title Case, including units if necessary.
    - A legend if plotting multiple categories of things.
- Does the results section provide a reasonable summary of the selected model’s performance?
- Is a well-formatted summary figure included in the results (or discussion) section?
  - This figure can be either a visualization or a well-formatted table.
  - Does the figure provide some insight into the performance or usability of the model?
  - A well-formatted table must be rendered as HTML in the resulting report.
- Is a conclusion stated in the discussion section?
  - Specifically, you must explicitly state whether or not you would use the model in practice.
- Does the conclusion have a reasonable justification?
  - Do the conclusion and justification consider the lab scenario?
  - Answer as if your job depends on it. In the future, that might be the case!
  - Using a single numeric metric is wholly insufficient, most importantly because it lacks context. You should give serious consideration to what errors can be made by your model, and what the consequences of those errors could be.
- Are the specifics of the conclusion included in the discussion?
  - Are the benefits and limitations discussed if you choose to use the model?
  - Are the risks and improvements discussed if you choose to not use the model?
- Throughout the discussion section, are course concepts used correctly and appropriately?
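To illustrate the code style basics mentioned in the rubric above, here is a small, hypothetical snippet showing spaces around binary operators but no spaces around = when passing arguments to function parameters:

```python
from sklearn.neighbors import KNeighborsRegressor

# spaces around binary operators
price, quantity, shipping = 10, 3, 5
total = price * quantity + shipping
print(total)

# no spaces around "=" when passing arguments to function parameters
model = KNeighborsRegressor(n_neighbors=5)
```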
Footnotes
You are not Charles Dickens and we are not paying you by the word.↩︎
This is Midwestern for “yes” but enthusiastically.↩︎
Importantly, this is not the export that Jupyter uses by default.↩︎