Lab 09: Tweets
For Lab 09, you will use airline tweets data to develop a model that will identify sentiment.
Background
Air travel can be a miserable experience. Travelers have a habit of taking to the platform formerly known as Twitter to complain and seek support from customer service. As such, airlines likely employ machine learning models, in addition to customer service representatives, to help efficiently process these communications.
Scenario and Goal
Who are you?
- You are a data scientist working for the social team of a major US airline.
What is your task?
- You are tasked with building a sentiment classifier that alerts customer service representatives to respond to negative tweets about the airline, while positive tweets are automatically acknowledged. Your goal is to develop a model that accurately classifies tweets as one of negative, neutral, or positive.
Who are you writing for?
- To summarize your work, you will write a report for your manager, who manages the social team. You can assume your manager is very familiar with the platform formerly known as Twitter, and somewhat familiar with the general concepts of machine learning.
Data
To achieve the goal of this lab, we will need previous tweets and their sentiment. The necessary data is provided in the files loaded below.
Source
The data for this lab originally comes from Kaggle.
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as “late flight” or “rude service”).
We are providing a modified version of this data for this lab. Modifications include:
- Keeping only the `airline_sentiment`, `text`, and `airline` variables.
- Withholding some data that will be considered the production data.
Data Dictionary
Each observation in the train, test, and (hidden) production data contains information about a particular tweet.
Response

- `sentiment` [object]: the sentiment of the tweet. One of `negative`, `neutral`, or `positive`.

Features

- `text` [object]: the full text of the tweet.

Additional Variables

- `airline` [object]: the airline the tweet was “sent” to.
Data in Python
To load the data in Python, use:
```python
import pandas as pd

tweets_train = pd.read_parquet(
    "https://cs307.org/lab/data/tweets-train.parquet",
)
tweets_test = pd.read_parquet(
    "https://cs307.org/lab/data/tweets-test.parquet",
)
```
Prepare Data for Machine Learning
Create the `X` and `y` variants of the data for use with `sklearn`:

```python
# create X and y for train data
X_train = tweets_train["text"]
y_train = tweets_train["sentiment"]

# create X and y for test data
X_test = tweets_test["text"]
y_test = tweets_test["sentiment"]
```
Here, we are purposefully excluding the `airline` variable for the creation of models.
You can assume that within the autograder, similar processing is performed on the production data.
Text Processing
To use the text of the tweets as input to machine learning models, you will need to do some preprocessing. The text cannot simply be input into the models we have seen.
```python
X_train
```

```
8318     @JetBlue Then en route to the airport the rebo...
3763     @united now you've lost my bags too.  At least...
9487     @USAirways Hi, can you attach my AA FF# 94LXA6...
2591     @United, will you fill it? Yes they will. Than...
12887    @AmericanAir thanks! I hope we get movies. Tv'...
                               ...
3416     @united Can i get a refund? I would like to bo...
279      @VirginAmerica what is your policy on flying a...
1814     @united I'm not sure how you can help. Your fl...
29       @VirginAmerica LAX to EWR - Middle seat on a r...
1130     @united Hopefully my baggage fees will be waiv...
Name: text, Length: 8235, dtype: object
```
To do so, we will create a so-called bag-of-words. Let’s see what that looks like with a small set of strings.
```python
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

word_counter = CountVectorizer()
word_counts = word_counter.fit_transform(
    [
        "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo",
        "The quick brown fox jumps over the lazy dog",
        "",
    ]
).todense()
print(word_counts)
```

```
[[0 8 0 0 0 0 0 0 0]
 [1 0 1 1 1 1 1 1 2]
 [0 0 0 0 0 0 0 0 0]]
```
```python
pd.DataFrame(
    word_counts,
    columns=sorted(list(word_counter.vocabulary_.keys())),
)
```
|   | brown | buffalo | dog | fox | jumps | lazy | over | quick | the |
|---|-------|---------|-----|-----|-------|------|------|-------|-----|
| 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2 |
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Essentially, we’ve created a number of feature variables, each one counting how many times a word in the vocabulary appears in a sample’s text. This is an example of feature engineering.
Let’s find the 100 most common words in the training tweets sent to the airlines.
```python
top_100_counter = CountVectorizer(max_features=100)
X_top_100 = top_100_counter.fit_transform(X_train)
print("Top 100 Words:")
print(top_100_counter.get_feature_names_out())
print("")
```

```
Top 100 Words:
['about' 'after' 'again' 'airline' 'all' 'am' 'americanair' 'amp' 'an'
 'and' 'any' 'are' 'as' 'at' 'back' 'bag' 'be' 'been' 'but' 'by' 'call'
 'can' 'cancelled' 'co' 'customer' 'delayed' 'do' 'don' 'flight'
 'flightled' 'flights' 'for' 'from' 'gate' 'get' 'got' 'had' 'has' 'have'
 'help' 'hold' 'hour' 'hours' 'how' 'http' 'if' 'in' 'is' 'it' 'jetblue'
 'just' 'late' 'like' 'me' 'my' 'need' 'no' 'not' 'now' 'of' 'on' 'one'
 'or' 'our' 'out' 'over' 'phone' 'plane' 'please' 're' 'service' 'so'
 'southwestair' 'still' 'thank' 'thanks' 'that' 'the' 'there' 'they'
 'this' 'time' 'to' 'today' 'united' 'up' 'us' 'usairways' 've'
 'virginamerica' 'was' 'we' 'what' 'when' 'why' 'will' 'with' 'would'
 'you' 'your']
```
```python
X_top_100_dense = X_top_100.todense()
X_top_100_dense
```

```
matrix([[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 1, 0],
        [0, 0, 0, ..., 0, 1, 0],
        ...,
        [0, 0, 0, ..., 0, 1, 1],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]], shape=(8235, 100))
```

```python
X_top_100.shape
```

```
(8235, 100)
```
```python
plane_idx = np.where(top_100_counter.get_feature_names_out() == "plane")
plane_count = np.sum(X_top_100.todense()[:, plane_idx])
print('The Word "plane" Appears:', plane_count)
```

```
The Word "plane" Appears: 362
```
Note that you’ll need to do this same process, but within a pipeline! You might also consider looking into other techniques to process text for input to models.
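As a loose illustration of counting words inside a pipeline, here is a minimal sketch. The toy strings, labels, grid values, and choice of `LogisticRegression` are all made up for illustration; in the lab you would fit on `X_train` and `y_train` and tune a grid of your own choosing.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# toy stand-in data; in the lab, use X_train and y_train instead
texts = [
    "terrible delay and lost bags",
    "lost my luggage again, awful",
    "thanks for the great service",
    "great crew, smooth flight",
    "flight was fine",
    "it was an ok flight",
] * 5
labels = (["negative"] * 2 + ["positive"] * 2 + ["neutral"] * 2) * 5

# putting the vectorizer inside the pipeline means cross-validation
# re-fits the vocabulary on each training fold, avoiding leakage
pipe = Pipeline([
    ("vectorizer", CountVectorizer()),
    ("classifier", LogisticRegression(max_iter=1000)),
])
grid = {
    "vectorizer__max_features": [50, 100],
    "classifier__C": [0.1, 1.0],
}
mod = GridSearchCV(pipe, grid, cv=3)
mod.fit(texts, labels)

# the fitted GridSearchCV object itself has fit, predict, and predict_proba
print(mod.predict(["so many delays, bags lost"]))
```

Because `GridSearchCV` exposes `fit`, `predict`, and `predict_proba`, the fitted object is the kind of single model the autograder expects.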
Additional information:
Sample Statistics
Before modeling, be sure to look at the data. Calculate the summary statistics requested on PrairieLearn.
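The kinds of summary statistics worth computing can be sketched with `pandas`. The toy frame below is a stand-in for `tweets_train`, and the exact statistics requested on PrairieLearn may differ:

```python
import pandas as pd

# toy stand-in frame; in the lab, run these on tweets_train instead
toy_tweets = pd.DataFrame({
    "text": ["@airline thanks!", "@airline lost my bag", "@airline ok"],
    "sentiment": ["positive", "negative", "neutral"],
})

# class balance of the response
print(toy_tweets["sentiment"].value_counts(normalize=True))

# average tweet length in characters
print(toy_tweets["text"].str.len().mean())
```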
Models
For this lab you will select one model to submit to the autograder. You may use any modeling techniques you’d like, so long as it meets these requirements:

- Your model must start from the given training data, unmodified.
  - Importantly, the types and shapes of `X_train` and `y_train` should not be changed.
  - In the autograder, we will call `mod.predict(X_test)` on your model, where your model is loaded as `mod` and `X_test` has a compatible shape with and the same variable names and types as `X_train`.
  - In the autograder, we will call `mod.predict(X_prod)` on your model, where your model is loaded as `mod` and `X_prod` has a compatible shape with and the same variable names and types as `X_train`.
  - We assume that you will use a `Pipeline` and `GridSearchCV` from `sklearn`, as you will need to deal with heterogeneous data, and you should be using cross-validation to tune your model.
    - More specifically, you should create a `Pipeline` that is fit with `GridSearchCV`. Done correctly, this will store a tuned model that you can submit to the autograder.
- Your model must have a `fit` method.
- Your model must have a `predict` method.
- Your model must have a `predict_proba` method.
- Your model should be created with `scikit-learn` version `1.6.1` or newer.
- Your model should be serialized with `joblib` version `1.4.2` or newer.
- Your serialized model must be less than 5MB.
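A minimal sketch of serializing a model and checking the size limit. The `DummyClassifier` and the filename `mod.joblib` are placeholders; in the lab you would dump your fitted `GridSearchCV` object instead:

```python
import os

import joblib
from sklearn.dummy import DummyClassifier

# stand-in model; in the lab this would be your fitted GridSearchCV
mod = DummyClassifier(strategy="most_frequent")
mod.fit(["some text", "more text"], ["negative", "positive"])

# serialize, then confirm the file is under the 5MB limit
joblib.dump(mod, "mod.joblib")
size_mb = os.path.getsize("mod.joblib") / (1024 * 1024)
print(f"model size: {size_mb:.3f} MB")
assert size_mb < 5
```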
To obtain the maximum points via the autograder, your model must outperform the following metrics:

- Test Accuracy: 0.8
- Production Accuracy: 0.8
Submission
On Canvas, be sure to submit both your source `.ipynb` file and a rendered `.html` version of the report.