
Stage I: Defining and Understanding the Dataset

This is the initial stage of the data cleaning process, and it is where most of the priors about the dataset are established.

In the data cleaning workflow, the user inputs:

  1. Unclean data sample in .csv / .json data type.

  2. Optionally, descriptive tags that describe the domain origin of the dataset, e.g. financial, sales, stock market data, etc.
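The input step above can be sketched roughly as follows. This is a minimal illustration, not the agent's actual interface: the function name `load_sample` and the tags-as-a-list convention are assumptions for the example.

```python
import pandas as pd

def load_sample(path, tags=None):
    """Load an unclean sample (.csv or .json) plus optional descriptive tags.

    `tags` is a list of free-text domain hints, e.g. ["financial", "sales"].
    """
    if path.endswith(".csv"):
        df = pd.read_csv(path)
    elif path.endswith(".json"):
        df = pd.read_json(path)
    else:
        raise ValueError(f"unsupported file type: {path}")
    # Return the raw frame untouched; cleaning decisions come later.
    return df, list(tags or [])
```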

It is worth asking what is truly expected from digesting an "unclean" dataset, or in other words: what makes data "unclean"?

"You never know unless you try - even if you are unsure, just pick one and go for it."

Every maths professor's go-to line (aka my dilemma)

The process usually starts by treating each cleaning stage as a kind of hypothesis test—posing a null case and deciding whether to reject or accept it using the right method. Along the way, a few truths about unclean data show up almost every time:

  • Data types are rarely consistent or correct.

  • Missing values creep into most columns.

  • Duplicated rows (and sometimes even whole columns) are hiding in plain sight.

  • Anomalies and outliers are just as common as missing values.
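The four recurring problems above can each be surfaced with a quick profiling pass. A minimal sketch with pandas, assuming a hypothetical `profile` helper (the 1.5×IQR rule used for outliers is one common convention, not necessarily the agent's):

```python
import pandas as pd

def profile(df):
    """Count the four usual suspects in an unclean dataset."""
    return {
        "dtypes": df.dtypes.astype(str).to_dict(),       # inconsistent/incorrect types
        "missing": df.isna().sum().to_dict(),            # missing values per column
        "duplicate_rows": int(df.duplicated().sum()),    # exact duplicate rows
        "duplicate_cols": int(df.T.duplicated().sum()),  # whole duplicated columns
    }

def iqr_outliers(s):
    """Count values outside the 1.5*IQR fences of a numeric column."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return int(((s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)).sum())
```

Each count maps naturally onto the hypothesis-test framing: a nonzero count is evidence against the null case that the column is clean.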

Persistent Data Profiling Examples

Example of an uncleaned dataset passed to the agent.
