Course: Advanced Machine Learning Lecturer: Dr. Sothea HAS
Objective: We have seen in the course that nonparametric models aim at directly estimating the regression function, i.e., the minimizer of the MSE criterion. In this TP, we shall learn how to implement three basic nonparametric models: \(K\)-NN, Decision Trees, and the Kernel Smoother method.
Abalone is a popular seafood in Japanese and European cuisine. However, the age of an abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings under a microscope, a tedious and time-consuming task. Other measurements, which are easier to obtain, can be used to predict the age instead, including physical measurements, weights, etc. This section aims at predicting the Rings of abalone from its physical measurements. Read and load the data from Kaggle: Abalone dataset.
# %pip install kagglehub  # if you have not installed the "kagglehub" module yet
import kagglehub

# Download the latest version
path = kagglehub.dataset_download("rodolfomendes/abalone-dataset")

# Import data
import pandas as pd
data = pd.read_csv(path + "/abalone.csv")
data.head()
|   | Sex | Length | Diameter | Height | Whole weight | Shucked weight | Viscera weight | Shell weight | Rings |
|---|-----|--------|----------|--------|--------------|----------------|----------------|--------------|-------|
| 0 | M | 0.455 | 0.365 | 0.095 | 0.5140 | 0.2245 | 0.1010 | 0.150 | 15 |
| 1 | M | 0.350 | 0.265 | 0.090 | 0.2255 | 0.0995 | 0.0485 | 0.070 | 7 |
| 2 | F | 0.530 | 0.420 | 0.135 | 0.6770 | 0.2565 | 0.1415 | 0.210 | 9 |
| 3 | M | 0.440 | 0.365 | 0.125 | 0.5160 | 0.2155 | 0.1140 | 0.155 | 10 |
| 4 | I | 0.330 | 0.255 | 0.080 | 0.2050 | 0.0895 | 0.0395 | 0.055 | 7 |
A. Overview of the dataset.
What’s the dimension of this dataset? How many quantitative and qualitative variables are there in this dataset?
Create a statistical summary and visualize the distribution of the quantitative columns, then of the qualitative one. Identify and handle any apparent problems.
Inspect whether there are any duplicated rows.
Study both correlation matrices of this dataset and comment on them.
Is the qualitative column useful for predicting the target Rings?
# To do
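The overview steps above can be sketched as follows. This is a minimal, standalone sketch: it recreates a few sample rows under the name `data` so the snippet runs on its own; in the notebook, apply the same calls to the full `data` frame loaded above.

```python
import pandas as pd

# A few rows mimicking the abalone data so the snippet runs standalone;
# in the notebook, use the full `data` frame loaded above instead.
data = pd.DataFrame({
    "Sex": ["M", "M", "F", "M", "I"],
    "Length": [0.455, 0.350, 0.530, 0.440, 0.330],
    "Height": [0.095, 0.090, 0.135, 0.125, 0.080],
    "Rings": [15, 7, 9, 10, 7],
})

# Dimension and variable types (quantitative vs. qualitative)
print(data.shape)
print(data.dtypes)

# Statistical summary of the quantitative columns
print(data.describe())

# Distribution of the qualitative column
print(data["Sex"].value_counts())

# Number of duplicated rows
print(data.duplicated().sum())

# Both correlation matrices of the quantitative columns
num = data.select_dtypes("number")
print(num.corr(method="pearson"))
print(num.corr(method="spearman"))
```

For visualization, `data.hist()` or seaborn's `pairplot` can accompany the summaries; a grouped boxplot of Rings by Sex helps judge whether the qualitative column carries predictive information.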
B. Model development.
Split the dataset into \(80\%-20\%\) training-testing data using random_state = 42.
Build a \(K\)-NN model and fine-tune it to predict the testing data. Report its CV-RMSE.
Build and fine-tune a Regression Tree to predict the testing data and report its CV-RMSE.
Build a Kernel Smoother model to predict the testing data and report its CV-RMSE (a Python module, gradientcobra, provides a KernelSmoother estimator).
Compare the cross-validation performance of the three models, then test all three models on the testing data. Create the following comparison table and conclude.
| Model | CV-RMSE | Test-RMSE | Test-\(R^2\) | Test-MAPE |
|--------|-----------|-----------|---------------|-----------|
| KNN | \(\dots\) | \(\dots\) | \(\dots\) | \(\dots\) |
| Tree | \(\dots\) | \(\dots\) | \(\dots\) | \(\dots\) |
| Kernel | \(\dots\) | \(\dots\) | \(\dots\) | \(\dots\) |
# To do
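A possible workflow for the split and the first two models is sketched below with scikit-learn. It uses a synthetic stand-in for the abalone features so it runs standalone; in the notebook, replace `X, y` with the encoded abalone features and the `Rings` column. The KernelSmoother from gradientcobra is not shown here, since its exact interface should be checked in that package's documentation.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the abalone data so the sketch runs standalone;
# in the notebook, use X = encoded features and y = data["Rings"] instead.
X, y = make_regression(n_samples=300, n_features=7, noise=5.0, random_state=42)

# 80%-20% training-testing split with random_state = 42, as required
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# K-NN: fine-tune the number of neighbors by 5-fold CV on RMSE
knn = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": range(1, 31)},
    scoring="neg_root_mean_squared_error", cv=5)
knn.fit(X_train, y_train)
cv_rmse_knn = -knn.best_score_  # CV-RMSE of the best K

# Regression Tree: fine-tune depth and leaf size the same way
tree = GridSearchCV(
    DecisionTreeRegressor(random_state=42),
    param_grid={"max_depth": [2, 4, 6, 8, None],
                "min_samples_leaf": [1, 5, 10, 20]},
    scoring="neg_root_mean_squared_error", cv=5)
tree.fit(X_train, y_train)
cv_rmse_tree = -tree.best_score_

# Test RMSE of the tuned models
rmse_knn = mean_squared_error(y_test, knn.predict(X_test)) ** 0.5
rmse_tree = mean_squared_error(y_test, tree.predict(X_test)) ** 0.5
print(f"KNN : CV-RMSE={cv_rmse_knn:.3f}, Test-RMSE={rmse_knn:.3f}")
print(f"Tree: CV-RMSE={cv_rmse_tree:.3f}, Test-RMSE={rmse_tree:.3f}")
```

Test-\(R^2\) and Test-MAPE can be added with `sklearn.metrics.r2_score` and `sklearn.metrics.mean_absolute_percentage_error` to fill the comparison table.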
2. Revisit Spam dataset
Your task in this section is to create email spam filters by applying the nonparametric models introduced in the course.
Report CV and test performance metrics on the spam dataset loaded below.
Build a pipeline that takes text input (a real email) and returns the type of the email using the best spam filter found in the first question.
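One way to sketch such a pipeline is shown below: vectorize the raw text, then classify with the tuned model (a \(K\)-NN classifier here, standing in for whichever model performed best). The tiny corpus, the `SpamFilter` name, and the hyperparameters are illustrative assumptions so the snippet runs standalone; in the notebook, fit the pipeline on the loaded spam dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the spam dataset so the sketch runs standalone;
# in the notebook, fit on the real spam data and labels instead.
texts = ["win money now claim your free prize",
         "free prize winner claim cash now",
         "meeting tomorrow about the project report",
         "please send me the address for the invitation"]
labels = ["spam", "spam", "nonspam", "nonspam"]

# Vectorize raw text, then classify with the chosen model
pipe = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
pipe.fit(texts, labels)

def SpamFilter(email):
    """Return the predicted type ('spam' or 'nonspam') of a raw email text."""
    return pipe.predict([email])[0]

print(SpamFilter("Congratulations, claim your free prize money now!"))
```

The same wrapper can be reused with the regression-tree-based classifier by swapping the final pipeline step.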
# Example:
email = 'Hi Jack,\n I hope this email find you well. I am writing to ask for the address of Marry because I want to send her an invitation for my wedding.\n\n Thank you for the information.\n\n Best regards, Mark'

# This is the prediction by KNN
print(f'* KNN predict this email to be: {SpamFilter(email)}')

# This is the prediction by Tree
print(f'* Tree predict this email to be: {SpamFilter(email)}')
* KNN predict this email to be: nonspam
* Tree predict this email to be: nonspam