3 Core Lego Blocks When Learning AI
These Lego blocks show up over and over across multiple AI algorithms
You will encounter a considerable learning curve as you start learning AI.
The landscape is vast. Most books and articles start with linear regression. While it is easy to learn and use, I found that a few individual building blocks appear in approach after approach.
If you understand these Lego blocks, subsequent reading goes much faster.
Cost or Error Functions are mathematical functions used to measure the difference between a model's predicted values and the actual values in the data. They help assess the model's performance and guide the optimization process to improve accuracy.
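As a small illustration, here is one common cost function, mean squared error (MSE), sketched in plain NumPy. The function name and example values are my own for illustration; many other cost functions (cross-entropy, mean absolute error, etc.) follow the same pattern of comparing predictions against actuals.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average squared gap between
    actual values and the model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Actuals [3, 5] vs. predictions [2, 7]: errors are 1 and -2,
# squared errors are 1 and 4, so the mean is 2.5.
print(mse([3.0, 5.0], [2.0, 7.0]))  # → 2.5
```

A lower MSE means the predictions sit closer to the actual data, which is exactly the signal the optimization process uses.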
Gradient Descent is a core algorithm used to reach an optimized solution. It is an iterative approach that moves toward the minimum of the cost function; in other words, it finds the parameter values the ML algorithm needs to do its job well.
Hyperparameters are parameters not learned from the data but set before the training begins. They govern the training process and influence the model's performance. A standard set of parameters and tuning approaches appears in algorithm after algorithm.
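To make the distinction concrete, here is a sketch of tuning one hyperparameter, the learning rate, over a small grid. The `descend` helper and the toy cost f(w) = (w - 3)² are my own illustrative choices; the point is that the learning rate is fixed before training, and different settings change how well training works.

```python
def descend(learning_rate, steps=50):
    """Run gradient descent on the toy cost f(w) = (w - 3)^2
    starting from w = 0, and return the final cost."""
    w = 0.0
    for _ in range(steps):
        w -= learning_rate * 2 * (w - 3)
    return (w - 3) ** 2

# A tiny grid search: try several learning rates (set before training,
# never learned from the data) and compare the resulting final cost.
for lr in [0.01, 0.1, 0.5, 1.1]:
    print(f"learning_rate={lr}: final cost {descend(lr):.6g}")
```

Too small a learning rate converges slowly, a well-chosen one converges quickly, and too large a one can diverge, which is why this kind of tuning loop shows up in algorithm after algorithm.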
For anyone in the industry, this is common knowledge. But for those who haven't lived "a day in the life" in AI, you only discover these Lego blocks after reading through multiple ML algorithms.
Lock these concepts in as early as possible, and they will make the rest of your reading easier.

