October 10, 2024

Understanding and Preprocessing Data

Data preprocessing is the first step in any machine learning project. The goal is to organize, clean, and prepare the raw data so that it can be used for training a model. Data often comes with issues like missing values, irrelevant information, or noise, so cleaning the data is essential.

Simple Example: Let’s start with a very basic dataset. Imagine we have student scores for three subjects: Math, Science, and English.

student_scores = {'Math': [95, 85, None, 70], 'Science': [88, 92, 80, 60], 'English': [None, 85, 88, 75]}

In this example, notice that some of the scores are missing (marked as None). The first step is to load this data and identify issues such as missing values:

import pandas as pd

# Load the scores into a DataFrame; missing entries (None) become NaN
df = pd.DataFrame(student_scores)
print(df)

This displays the table of scores, with the missing entries shown as NaN. To deal with missing values, we can either drop those rows or fill them in with an average value:

# Fill each missing score with the average for that subject
df.fillna(df.mean(), inplace=True)
print(df)

Now the missing values are replaced with the average score for that subject. This ensures that the data is complete and ready for analysis.
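
If dropping the incomplete rows is preferred over filling them, pandas offers dropna(). A minimal sketch, assuming it is applied to the DataFrame before the fill above:

# Alternative: drop any row that contains a missing value
df_dropped = df.dropna()
print(df_dropped)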

Exoplanet Data: In the exoplanet project, we work with a large dataset of flux (brightness) measurements recorded from stars, some of which host exoplanets. We start by loading this data:

# Load the training and test sets of flux measurements
train_data = pd.read_csv('exoTrain.csv')
test_data = pd.read_csv('exoTest.csv')
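
Right after loading, it helps to take a quick look at the size and the first few rows of the training set. A minimal sketch using standard pandas calls:

# Inspect the dimensions and a preview of the training data
print(train_data.shape)
print(train_data.head())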

Once the data is loaded, it’s important to check for missing values. We use a heatmap to visualize missing data:

import seaborn as sns
import matplotlib.pyplot as plt
sns.heatmap(train_data.isnull(), cbar=False)
plt.show()

This step helps us identify any missing or corrupt data. If we find missing values, we can decide to drop them or fill them with a specific value, just like we did with the student scores. Cleaning the data is critical before feeding it into a machine learning model.
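
As a rough sketch of what that might look like for the exoplanet data, assuming all columns are numeric as they are in this flux dataset:

# Count missing values per column, then fill any gaps with the column mean
print(train_data.isnull().sum())
train_data = train_data.fillna(train_data.mean())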
