Typical machine learning applications

Author

Borja Requena

1 Recap

So far, we have visited the fundamentals of machine learning. We have tackled both regression and classification tasks, building the algorithms from scratch. This has allowed us to introduce key machine learning concepts, such as loss functions, stochastic gradient descent, overfitting, and regularization, which are transferable to any machine learning task and architecture.

Building a polynomial regression from scratch, we have gained intuition about what the model parameters are, how to compute their gradients, and how to update them to obtain a better model. Then, with the logistic regression, we have learned the difference between a loss function and a metric. Finally, with the perceptron, we have mastered a fundamental building block of many complex machine learning architectures, as well as developed further intuition about classification tasks in multiple dimensions.
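As a quick reminder of that workflow, here is a minimal sketch of a gradient descent update for a polynomial regression with a mean squared error loss. The data, learning rate, and number of steps are made up for illustration; they are not taken from the previous lectures.

```python
import numpy as np

# Hypothetical data: samples from y = 2x^2 - x + 3 plus a bit of noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x**2 - x + 3 + 0.1 * rng.normal(size=100)

# Design matrix with columns [1, x, x^2]; one weight (parameter) per column.
X = np.stack([np.ones_like(x), x, x**2], axis=1)
w = np.zeros(3)

lr = 0.1  # learning rate (assumed value)
for step in range(500):
    y_pred = X @ w                           # model predictions
    grad = 2 * X.T @ (y_pred - y) / len(y)   # gradient of the MSE loss w.r.t. w
    w -= lr * grad                           # gradient descent update

print(w)  # should approach the true coefficients [3, -1, 2]
```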

2 Next steps

From now on, we will take a more applied approach. We will use neural networks to tackle more challenging problems than those we have faced so far. However, the basic principles remain the same.

In this lecture, we provide an overview of various prototypical machine learning tasks over different kinds of data. This will provide context for the upcoming lessons, as well as (hopefully) some motivation! We will focus on three main types of data: images, text, and structured data. These give rise to computer vision, natural language processing, and tabular data tasks, respectively.

Sit back and enjoy the ride!