_posts/2020-10-02-nlp-translation.md
Speedups are computed with respect to the 1 worker case, and are intended to illustrate […]
The graphs below show the time speedups for the LSTM model and Transformer model (respectively).
<a href="{{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4a_speedups.png" data-lightbox="task4a_speedups" data-title="Speedups for GNMT">
<ahref="{{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_speedup.png"data-lightbox="task4b_speedups"data-title="Speedups for Transformer">
153
+
<ahref="{{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_speedups.png"data-lightbox="task4b_speedups"data-title="Speedups for Transformer">
The left graph shows the absolute speed-ups with respect to one worker, and the right one omits […]
The next figures show the total time spent in each step of training.
- The top left graph in each figure shows the total training time: `total = compute + communication`.
- Computation time is `compute = forward + backward + optimization + loss computation + init + end`.
- Communication covers only the `aggregation` steps, and is measured precisely so that it accounts only for the communication of tensors between workers (a minimal timing sketch follows this list).
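To make this decomposition concrete, here is a minimal sketch of how per-step wall-clock times could be accumulated in a PyTorch training loop. The `StepTimer` helper and the step names are illustrative assumptions, not the instrumentation actually used for these benchmarks.

```python
import time
from collections import defaultdict

import torch


class StepTimer:
    """Accumulate wall-clock time per training step (forward, backward, aggregation, ...)."""

    def __init__(self):
        self.totals = defaultdict(float)

    def record(self, name, start):
        # Synchronize so asynchronously launched GPU kernels are included in the measurement.
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        self.totals[name] += time.perf_counter() - start


timer = StepTimer()


def train_step(model, optimizer, criterion, batch, target):
    t = time.perf_counter()
    output = model(batch)
    timer.record("forward", t)

    t = time.perf_counter()
    loss = criterion(output, target)
    timer.record("loss computation", t)

    t = time.perf_counter()
    loss.backward()
    timer.record("backward", t)

    # Gradient aggregation across workers would be timed here as "aggregation",
    # the only bucket that involves communication between machines.

    t = time.perf_counter()
    optimizer.step()
    optimizer.zero_grad()
    timer.record("optimization", t)
```

The `compute` total in the graphs would then be the sum of the forward, backward, loss and optimization buckets, while `communication` corresponds to the aggregation bucket alone.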
As expected, we can see that the compute steps take less time as we increase the number of nodes, while communication takes more and more time, following a sub-linear path. Looking at both graphs, we can see that `aggregation` times increase, but slowly, and reach a plateau quite quickly: the time spent communicating with 8 and 16 workers doesn't differ much.
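One plausible explanation for this plateau (an assumption on our side; the post does not name the collective algorithm used by the backend) is that a bandwidth-optimal ring all-reduce moves roughly 2(N-1)/N times the payload per worker, a quantity that saturates as the number of workers N grows:

```python
# Per-worker traffic of a ring all-reduce, relative to the payload size:
# each worker sends and receives about 2 * (N - 1) / N copies of the tensor.
def ring_allreduce_traffic(workers: int) -> float:
    return 2 * (workers - 1) / workers


for n in (2, 4, 8, 16):
    print(f"{n:2d} workers -> {ring_allreduce_traffic(n):.3f}x the payload")
# 2 -> 1.000x, 4 -> 1.500x, 8 -> 1.750x, 16 -> 1.875x: doubling from 8 to 16
# workers adds only ~7% more per-worker traffic, consistent with the plateau.
```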
The other compute steps follow the inverse pattern: a fast decrease in the beginning that slowly flattens out. The steps that benefit the most from distribution are backpropagation and the loss computation. This makes sense, as the batches get smaller on each machine.
### Performance comparison
Finally, the following figures show the share of time spent in each step of training. The *Aggregation* step corresponds to the aggregation of weights between the workers, and is the only step where communication happens.
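Continuing the illustrative timer from earlier, the shares plotted in these figures could be derived as simple percentages of the total. This is again a sketch with made-up numbers, not the post's plotting code:

```python
# `totals` maps step name -> accumulated seconds, e.g. from the StepTimer above.
totals = {"forward": 120.0, "backward": 260.0, "loss computation": 40.0,
          "optimization": 55.0, "aggregation": 900.0}  # made-up example values

grand_total = sum(totals.values())
for step, seconds in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{step:>16}: {100 * seconds / grand_total:5.1f} %")
```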
#### LSTM
Communication takes up a huge part of training as we increase distribution: around 80% of the time is spent sending tensors for 16 workers!
This could be made faster by using a more appropriate interconnect between the workers (currently 10 GB/s), which could reduce communication times by a factor of 10 or more.
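As a rough back-of-envelope illustration (the ~300 MB gradient payload below is a hypothetical figure, not one reported in the post), aggregation time on a bandwidth-bound link scales inversely with the link speed:

```python
payload_bytes = 300e6  # hypothetical: ~300 MB of gradients exchanged per aggregation

for name, bandwidth in [("10 GB/s network", 10e9), ("100 GB/s-class interconnect", 100e9)]:
    seconds = payload_bytes / bandwidth  # lower bound: one full transfer of the payload
    print(f"{name}: at least {seconds * 1e3:.0f} ms per aggregation")
# 10 GB/s network: at least 30 ms per aggregation
# 100 GB/s-class interconnect: at least 3 ms per aggregation
```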
We can clearly see the limits of the hardware used here: communication quickly becomes the bottleneck, as very large tensors are shared between an increasing number of workers.
Here, *All Reduce* aggregation of gradients is performed before the optimization step, which yields a lot of exchanged messages. It would be interesting to see how much the time spent communicating tensors could be reduced by using a more advanced aggregation technique (e.g. sharing with neighbors in a pre-defined topology).
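For reference, the all-reduce-before-optimizer pattern described here looks roughly like the following with PyTorch's `torch.distributed` API. This is a generic sketch assuming synchronous data parallelism, not necessarily the exact code used for these benchmarks.

```python
import torch
import torch.distributed as dist


def allreduce_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all workers before the optimizer step."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum each gradient tensor over all workers, then divide to get the mean.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size


# Inside the training loop (after dist.init_process_group(...) has been called):
#   loss.backward()
#   allreduce_gradients(model)  # the only step where workers communicate
#   optimizer.step()
```

A topology-aware scheme would replace the global `all_reduce` with exchanges restricted to a worker's neighbors (e.g. in a ring or torus), trading exact synchronous averaging for less traffic per step.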
#### Transformer
Compared to the LSTM model, the communication time ratio follows a similar path. However, as this model does not use LSTM layers, the overall time is lower.
## Conclusion
Both models solve an identical task, with almost identical datasets and a similar training algorithm, but the two architectures are very different, so it is interesting to see how each reacts to distribution. The similar results show that both models benefit comparably from multiple workers, and both are very quickly bottlenecked by the communication hardware. Here, nodes communicate over a regular high-speed network, which mimics a real "distributed" training environment where machines could be in different locations. With direct or higher-performance communication between the nodes (e.g. NVLink, or Google's Virtual NIC), we would observe speedups close to the compute speedups, i.e. close to linear speedups for both models.