Transformers and Attention Mechanisms - Pre Quiz - Attempt Review
Quiz review
Started on Wednesday, 10 April 2024, 10:35 PM
State Finished
Completed on Wednesday, 10 April 2024, 10:37 PM
Time taken 1 min 59 secs
Marks 10.00/10.00
Grade 100.00 out of 100.00
Question 1
Correct
What is the primary component of the Transformer architecture that helps it handle sequences?
CNN
None of the options given
LSTM
RNN
Attention Mechanism
The correct answer is: Attention Mechanism
Question 2
Correct
The Transformer architecture introduced the concept of self-attention to handle which primary challenge in sequence modeling?
The correct answer is: Capturing dependencies regardless of their distance in the input
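A minimal NumPy sketch of scaled dot-product self-attention (illustrative only; the shapes, weight names, and data are hypothetical, not from the quiz). Because every query scores against every key in one step, a dependency between positions 1 and 100 is handled the same way as one between adjacent tokens:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the input tokens into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Every query scores against every key, regardless of distance.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V  # each output mixes information from all positions

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))           # 6 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)   # shape (6, 8)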
Question 3
Correct
What is the primary advantage of pretraining a Transformer on a large corpus before fine-tuning on a specific task?
The correct answer is: It allows the model to leverage general language understanding
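One common way to leverage that general language understanding, sketched with the Hugging Face transformers library (assuming it and PyTorch are installed; the checkpoint name, labels, and toy batch are illustrative): load pretrained weights, then update them on task-specific data.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load weights pretrained on a large corpus; only the small
# classification head on top is initialized from scratch.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Fine-tune on task-specific examples (a single toy batch here).
batch = tokenizer(["great movie", "terrible plot"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()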
Question 4
Correct
What is the first step in training a Transformer model for a specific task?
Pre-training
Backpropagation
Fine-tuning
None of the options given
Initialization
The correct answer is: Pre-training
Question 5
Correct
Text summarization
Speech recognition
Image generation using DALL·E
Named entity recognition
Sequence alignment
Question 6
Correct
In the context of Transformers for language translation, what does the encoder primarily focus on?
The correct answer is: Processing and representing the source language
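A rough shape-level sketch with PyTorch's built-in nn.Transformer (token embeddings and real data are omitted; all dimensions are illustrative): the encoder consumes only the source sequence, and the decoder attends to the encoder's representation while producing the target.

import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, batch_first=True)

src = torch.randn(1, 10, 512)  # source-language sequence (10 tokens, embedded)
tgt = torch.randn(1, 7, 512)   # target-language sequence generated so far

# The encoder processes and represents the source language on its own...
memory = model.encoder(src)        # shape (1, 10, 512)
# ...and the decoder conditions on that representation to emit the target.
out = model.decoder(tgt, memory)   # shape (1, 7, 512)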
Question 7
Correct
Image GPT
T5
GPT
BERT
DALL·E
Question 8
Correct
The correct answer is: Capturing different types of information from the input
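A short sketch using PyTorch's nn.MultiheadAttention (dimensions illustrative; average_attn_weights=False requires a reasonably recent PyTorch): each head has its own learned projections, so the per-head attention maps can capture different relationships in the same input.

import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
x = torch.randn(1, 10, 64)  # one sequence of 10 tokens

# Self-attention: the same tensor serves as query, key, and value.
out, weights = attn(x, x, x, average_attn_weights=False)
print(out.shape)      # torch.Size([1, 10, 64])
print(weights.shape)  # torch.Size([1, 4, 10, 10]) -- one attention map per head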
Question 9
Correct
The correct answer is: It allows the model to focus on relevant parts of the input when producing an output
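A toy numeric illustration of that focusing effect (the alignment scores are made up, not from the quiz): the softmax turns raw scores into a probability distribution over input positions, concentrating mass on the most relevant token.

import numpy as np

# Hypothetical alignment scores between one output step and four input tokens.
scores = np.array([0.2, 3.1, 0.4, -1.0])
weights = np.exp(scores) / np.exp(scores).sum()
print(weights.round(3))  # [0.048 0.878 0.059 0.015] -- most mass on token 2
print(weights.sum())     # ~1.0: a proper "focus" distribution over the input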
Question 10
Correct