Time: From 10:00 to 17:00
Location: Blue Lecture Room
PLUS+ TRAINING PROGRAM: Crash course in deep learning - Advanced module by Prof. Volpe & Prof. Manzo
The course will be taught by Prof. Giovanni Volpe | Full Professor at the University of Gothenburg & Prof. Carlo Manzo | Associate Professor at the Universitat de Vic.
The advanced course will cover the module "Recurrent Neural Networks for Timeseries Analysis, Attention, and Transformers for Sequence Processing".
We will explore advanced neural network architectures for sequential data, focusing on Recurrent Neural Networks (RNNs), Attention mechanisms, and Transformers. We will begin by implementing RNNs and applying them to tasks like temperature forecasting and language translation. Next, we will introduce the concept of attention, emphasizing its role in improving machine translation by focusing on relevant input elements.
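As a taste of the kind of hands-on exercise involved, the sketch below shows a minimal LSTM-based temperature forecaster in PyTorch. The model size, window length, and synthetic sine-wave data are illustrative assumptions, not material taken from the course itself.

```python
import torch
import torch.nn as nn

# Minimal sketch: an LSTM that predicts the next temperature reading
# from a window of past readings. All sizes are illustrative.
class TemperatureForecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)         # out: (batch, window, hidden)
        return self.head(out[:, -1])  # predict from the last time step

# Toy training loop on a synthetic sine-wave "temperature" series.
t = torch.linspace(0, 20, 500)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = TemperatureForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")
```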
Finally, we will explore Transformers, a ground-breaking architecture that uses self-attention to achieve superior performance in tasks such as text translation, sentiment analysis, and image processing with Vision Transformers (ViT). The class will showcase practical applications of these models to diverse real-world problems.
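For a glimpse of the self-attention mechanism at the heart of Transformers, here is a minimal sketch of scaled dot-product self-attention; the sequence length, embedding dimension, and random projection weights are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence x: (seq, d_model)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv       # project to queries, keys, values
    scores = q @ k.T / k.shape[-1] ** 0.5  # pairwise similarities, scaled
    weights = F.softmax(scores, dim=-1)    # each position attends to all others
    return weights @ v                     # weighted sum of value vectors

# Illustrative sizes: a sequence of 10 tokens with 16-dim embeddings.
d_model = 16
x = torch.randn(10, d_model)
Wq, Wk, Wv = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # torch.Size([10, 16])
```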
If you are interested in participating, please REGISTER by January 17.