Commit eebe0b6

Author: Edoardo Holzl
Reorganize image files

1 parent fbfc571
36 files changed: 28 additions & 28 deletions
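The changes below are mechanical: each blog image moves under `public/images/blog/<post-or-drafts>/` (shared assets under `public/images/assets/`), and every reference in the Markdown and CSS sources is rewritten to the new path. A migration of this kind can be sketched in shell; the `work/` sandbox and single example file here are illustrative, not taken from the commit:

```shell
#!/bin/sh
# Illustrative sketch of the migration pattern in this commit:
# move one blog image into a per-post subdirectory, then rewrite
# every reference to it. The work/ sandbox is made up for the demo.
set -eu
mkdir -p work/public/images/blog/drafts
printf '<img src="{{ site.baseurl }}public/images/scaling-epoch-prec1.png"/>\n' > work/page.md
: > work/public/images/scaling-epoch-prec1.png

# 1) Move the image (in a real repo: `git mv`, so history follows the file).
mv work/public/images/scaling-epoch-prec1.png work/public/images/blog/drafts/

# 2) Rewrite references in source files that mention the old path.
sed -i.bak 's#public/images/scaling-epoch-prec1.png#public/images/blog/drafts/scaling-epoch-prec1.png#g' work/page.md
rm work/page.md.bak

cat work/page.md
```

In a real repository one would run the `sed` rewrite over all tracked text files (e.g. via `git grep -l` piped to `xargs sed`) rather than a single page.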

_drafts/Benchmark1-scaling.md
Lines changed: 4 additions & 4 deletions

@@ -101,12 +101,12 @@ The definitions helm chart values can be found [here](https://mlbench.readthedoc
 
 ## Results
 * Epochs to Top-1 Validation Accuracy
-<a href="{{ site.baseurl }}public/images/scaling-epoch-prec1.png" data-lightbox="Run" data-title="Validation Accuracy @ 1">
-<img src="{{ site.baseurl }}public/images/scaling-epoch-prec1.png" alt="Validation Accuracy @ 1" style="max-width:80%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/scaling-epoch-prec1.png" data-lightbox="Run" data-title="Validation Accuracy @ 1">
+<img src="{{ site.baseurl }}public/images/blog/drafts/scaling-epoch-prec1.png" alt="Validation Accuracy @ 1" style="max-width:80%;"/>
 </a>
 * Time to Top-1 Validation Accuracy
-<a href="{{ site.baseurl }}public/images/scaling-time-prec1.png" data-lightbox="Run" data-title="Validation Accuracy @ 1">
-<img src="{{ site.baseurl }}public/images/scaling-time-prec1.png" alt="Validation Accuracy @ 1" style="max-width:80%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/scaling-time-prec1.png" data-lightbox="Run" data-title="Validation Accuracy @ 1">
+<img src="{{ site.baseurl }}public/images/blog/drafts/scaling-time-prec1.png" alt="Validation Accuracy @ 1" style="max-width:80%;"/>
 </a>
 
 
_drafts/MLBench-benchmarking-mpi-speed.md
Lines changed: 4 additions & 4 deletions

@@ -17,8 +17,8 @@ In this experiment, we compare MPI P2P communication for
 CUDA Inter-Process Communication (IPC) improves communication between GPUs on the same node. In openmpi, one can use `--mca btl_smcuda_use_cuda_ipc` to turn on/off this functionality. We demostrate the influence of CUDA-IPC by sending/receiving a vector on a node with two GPUs.
 
 ### Results
-<a href="{{ site.baseurl }}public/images/mpi-speed-p2p.png" data-lightbox="Run" data-title="MPI Speed P2P">
-<img src="{{ site.baseurl }}public/images/mpi-speed-p2p.png" alt="MPI Speed P2P" style="max-width:80%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/mpi-speed-p2p.png" data-lightbox="Run" data-title="MPI Speed P2P">
+<img src="{{ site.baseurl }}public/images/blog/drafts/mpi-speed-p2p.png" alt="MPI Speed P2P" style="max-width:80%;"/>
 </a>
 
 - The P2P communication between two nodes is bounded by network bandwidth (`7.5 Gbit/s` measured by `iperf`). Communicating large vectors on CPU/GPU have similar throughput.
@@ -41,8 +41,8 @@ The connection between GPUs are PHB which traverss PCIe as well as a PCIe Host B
 ### Results
 The results of experiments are shown below. Note that the bandwidth here is calculated by dividing the vector size by the time it spent. The actual bandwidth depends on the implementation of all reduce.
 
-<a href="{{ site.baseurl }}public/images/mpi-speed-collective.png" data-lightbox="Run" data-title="MPI Speed Collective">
-<img src="{{ site.baseurl }}public/images/mpi-speed-collective.png" alt="MPI Speed Collective" style="max-width:80%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/mpi-speed-collective.png" data-lightbox="Run" data-title="MPI Speed Collective">
+<img src="{{ site.baseurl }}public/images/blog/drafts/mpi-speed-collective.png" alt="MPI Speed Collective" style="max-width:80%;"/>
 </a>
 
 - The NCCL all reduce does not give better performance when the GPU per machine is 1 or 2.

_drafts/MLBench-limits-lie-00.md
Lines changed: 6 additions & 6 deletions

@@ -15,22 +15,22 @@ First create a cluster of two `n1-standard-4` instances with `limits.cpu=1000m`
 
 ### master/worker-0 node
 
-<a href="{{ site.baseurl }}public/images/lie-00-resource-node1.png" data-lightbox="Dashboard_Index" data-title="lie-00-resource-node1">
-<img src="{{ site.baseurl }}public/images/lie-00-resource-node1.png" alt="lie-00-resource-node1" style="max-width:100%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node1.png" data-lightbox="Dashboard_Index" data-title="lie-00-resource-node1">
+<img src="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node1.png" alt="lie-00-resource-node1" style="max-width:100%;"/>
 </a>
 
 Only 2 pods are for mlbench: `release1-mlbench-master-6448bfb454-sxm2l` (`100m` CPU) and `release1-mlbench-worker-0` (`1000m` CPU). The rest of pods request `1161m` of CPU and `750MB` memory.
 
 The summary of resources on this node is (requests `2261m` CPU in total )
 
-<a href="{{ site.baseurl }}public/images/lie-00-resource-node1-summary.png" data-lightbox="Dashboard_Index" data-title="The MLBench Dashboard">
-<img src="{{ site.baseurl }}public/images/lie-00-resource-node1-summary.png" alt="The MLBench Dashboard" style="max-width:100%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node1-summary.png" data-lightbox="Dashboard_Index" data-title="The MLBench Dashboard">
+<img src="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node1-summary.png" alt="The MLBench Dashboard" style="max-width:100%;"/>
 </a>
 
 ### worker-1 node
 On worker-1 node, there are much less pods.
-<a href="{{ site.baseurl }}public/images/lie-00-resource-node2.png" data-lightbox="Dashboard_Index" data-title="lie-00-resource-node2">
-<img src="{{ site.baseurl }}public/images/lie-00-resource-node2.png" alt="lie-00-resource-node2" style="max-width:100%;"/>
+<a href="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node2.png" data-lightbox="Dashboard_Index" data-title="lie-00-resource-node2">
+<img src="{{ site.baseurl }}public/images/blog/drafts/lie-00-resource-node2.png" alt="lie-00-resource-node2" style="max-width:100%;"/>
 </a>
 
 So the amount of resources available is limited to the master node. In the previous setting we can allocate at most `3920-1161-100=2659m` for each worker.

_posts/2018-09-07-introducing-mlbench.md
Lines changed: 2 additions & 2 deletions

@@ -8,8 +8,8 @@ excerpt_separator: <!--more-->
 ---
 MLBench is a framework for distributed machine learning. Its purpose is to improve transparency, reproducibility, robustness, and to provide fair performance measures as well as reference implementations, helping adoption of distributed machine learning methods both in industry and in the academic community.
 
-<a href="{{ site.baseurl }}public/images/Dashboard_Index.png" data-lightbox="Dashboard_Index" data-title="The MLBench Dashboard">
-<img src="{{ site.baseurl }}public/images/Dashboard_Index.png" alt="The MLBench Dashboard" style="max-width:80%;"/>
+<a href="{{ site.baseurl }}public/images/blog/2018-09-07-introducing-mlbench/Dashboard_Index.png" data-lightbox="Dashboard_Index" data-title="The MLBench Dashboard">
+<img src="{{ site.baseurl }}public/images/blog/2018-09-07-introducing-mlbench/Dashboard_Index.png" alt="The MLBench Dashboard" style="max-width:80%;"/>
 </a>
 
 <!--more-->

_posts/2020-09-08-communication-backend-comparison.md
Lines changed: 4 additions & 4 deletions

@@ -73,8 +73,8 @@ and can be sped up using distributed training.
 #### CPU
 In the graph below, we compare the speeds taken to perform an `all reduce` operation between 2, 4 and 8 workers, of `Float16` and `Float32` CPU tensors.
 
-<a href="{{ site.baseurl }}public/images/backends_comparison_by_workers.png" data-lightbox="backends_comparison_by_workers" data-title="Backend performance comparison (CPU tensors)">
-<img src="{{ site.baseurl }}public/images/backends_comparison_by_workers.png" alt="Backend performance comparison (CPU tensors)" style="max-width:100%;"/>
+<a href="{{ site.baseurl }}public/images/blog/2020-09-08-communication-backend-comparison/backends_comparison_by_workers.png" data-lightbox="backends_comparison_by_workers" data-title="Backend performance comparison (CPU tensors)">
+<img src="{{ site.baseurl }}public/images/blog/2020-09-08-communication-backend-comparison/backends_comparison_by_workers.png" alt="Backend performance comparison (CPU tensors)" style="max-width:100%;"/>
 </a>
 
 ##### Key differences
@@ -88,8 +88,8 @@ In the graph below, we compare the speeds taken to perform an `all reduce` opera
 
 We now compare the speeds for GPU tensors. Here, we have the addition of NCCL in the comparison.
 
-<a href="{{ site.baseurl }}public/images/backends_comparison_by_workers_CUDA.png" data-lightbox="backends_comparison_by_workers" data-title="Backend performance comparison (GPU tensors)">
-<img src="{{ site.baseurl }}public/images/backends_comparison_by_workers_CUDA.png" alt="Backend performance comparison (GPU tensors)" style="max-width:100%;"/>
+<a href="{{ site.baseurl }}public/images/blog/2020-09-08-communication-backend-comparison/backends_comparison_by_workers_CUDA.png" data-lightbox="backends_comparison_by_workers" data-title="Backend performance comparison (GPU tensors)">
+<img src="{{ site.baseurl }}public/images/blog/2020-09-08-communication-backend-comparison/backends_comparison_by_workers_CUDA.png" alt="Backend performance comparison (GPU tensors)" style="max-width:100%;"/>
 </a>
 
 ##### Key differences

index.md
Lines changed: 4 additions & 4 deletions

@@ -66,10 +66,10 @@ Check out our <a href="https://mlbench.github.io/blog/">blog</a>!
 <h2>Sponsors</h2>
 
 <ul style="list-style-type:none;">
-<li><img src="{{ site.baseurl }}public/images/Logo_EPFL.png" alt="EPFL" style="max-width:200px;border-radius:0px;"/></li>
-<li><img src="{{ site.baseurl }}public/images/pwc_logo.png" alt="PwC" style="max-width:200px;border-radius:0px;"/></li>
-<li><img src="{{ site.baseurl }}public/images/google.png" alt="Google" style="max-width:200px;border-radius:0px;"/></li>
-<li><img src="{{ site.baseurl }}public/images/Facebook-Wordmark-Gray.png" alt="Facebook" style="max-width:200px;border-radius:0px;"/></li>
+<li><img src="{{ site.baseurl }}public/images/assets/Logo_EPFL.png" alt="EPFL" style="max-width:200px;border-radius:0px;"/></li>
+<li><img src="{{ site.baseurl }}public/images/assets/pwc_logo.png" alt="PwC" style="max-width:200px;border-radius:0px;"/></li>
+<li><img src="{{ site.baseurl }}public/images/assets/google.png" alt="Google" style="max-width:200px;border-radius:0px;"/></li>
+<li><img src="{{ site.baseurl }}public/images/assets/Facebook-Wordmark-Gray.png" alt="Facebook" style="max-width:200px;border-radius:0px;"/></li>
 </ul>
 
 

public/css/lightbox.css
Lines changed: 4 additions & 4 deletions

@@ -76,7 +76,7 @@ html.lb-disable-scrolling {
   width: 32px;
   height: 32px;
   margin: 0 auto;
-  background: url(../images/loading.gif) no-repeat;
+  background: url(../images/assets/loading.gif) no-repeat;
 }
 
 .lb-nav {
@@ -107,7 +107,7 @@ html.lb-disable-scrolling {
   width: 34%;
   left: 0;
   float: left;
-  background: url(../images/prev.png) left 48% no-repeat;
+  background: url(../images/assets/prev.png) left 48% no-repeat;
   filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0);
   opacity: 0;
   -webkit-transition: opacity 0.6s;
@@ -125,7 +125,7 @@ html.lb-disable-scrolling {
   width: 64%;
   right: 0;
   float: right;
-  background: url(../images/next.png) right 48% no-repeat;
+  background: url(../images/assets/next.png) right 48% no-repeat;
   filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0);
   opacity: 0;
   -webkit-transition: opacity 0.6s;
@@ -189,7 +189,7 @@ html.lb-disable-scrolling {
   float: right;
   width: 30px;
   height: 30px;
-  background: url(../images/close.png) top right no-repeat;
+  background: url(../images/assets/close.png) top right no-repeat;
   text-align: right;
   outline: none;
   filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=70);

public/images/Create_Run.png
Binary file, -55.2 KB (not shown)

public/images/New_Run.png
Binary file, -13.2 KB (not shown)

public/images/Pytorch_New_Run.png
Binary file, -58.8 KB (not shown)
