projects.html: 5 additions & 5 deletions
@@ -51,17 +51,17 @@ <h3> Papers </h3>
 <li> Real-time Artificial Intelligence for Accelerator Control: A Study at the Fermilab Booster, <a href="https://arxiv.org/abs/2011.07371">arXiv:2011.07371</a>. </li>
 <li> A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC, <a href="https://doi.org/10.1109/TNS.2021.3087100">IEEE Trans. Nucl. Sci. 68, 2179 (2021)</a>. </li>
 <li> Autoencoders on FPGAs for real-time, unsupervised new physics detection at 40 MHz at the Large Hadron Collider, <a href="https://arxiv.org/abs/2108.03986">arXiv:2108.03986</a>. </li>
-<li> hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices, <a href="https://arxiv.org/abs/2103.05579">arXiv:2103.05579</a>. </li>
-<li> GPU-accelerated machine learning inference as a service for computing in neutrino experiments, <a href="https://doi.org/10.1038/s42256-021-00356-5">Nat. Mach. Intell. 3, 675 (2021)</a>. </li>
+<li> hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices, <a href="https://arxiv.org/abs/2103.05579">tinyML Research Symposium 2021</a>. </li>
+<li> GPU-accelerated machine learning inference as a service for computing in neutrino experiments, <a href="https://doi.org/10.3389/fdata.2020.604083">Front. Big Data 3, 48 (2021)</a>. </li>
 <li> Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics, <a href="https://arxiv.org/abs/2008.03601">arXiv:2008.03601</a>. </li>
 <li> GPU coprocessors as a service for deep learning inference in high energy physics, <a href="https://arxiv.org/abs/2007.10359">arXiv:2007.10359</a>. </li>
 <li> Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors, <a href="https://arxiv.org/abs/2006.10159">arXiv:2006.10159</a>. </li>
 <li> Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml, <a href="https://doi.org/10.1088/2632-2153/aba042">Mach. Learn.: Sci. Technol. 2, 015001 (2021)</a>. </li>
 <li> Fast inference of Boosted Decision Trees in FPGAs for particle physics, <a href="https://doi.org/10.1088/1748-0221/15/05/p05026">JINST 15, P05026 (2020)</a>. </li>
-<li> ESP4ML: Platform-Based Design of Systems-on-Chip for Embedded Machine Learning, <a href="https://sld.cs.columbia.edu/pubs/giri_date20.pdf">DATE Conference 2020</a>. </li>
+<li> ESP4ML: Platform-Based Design of Systems-on-Chip for Embedded Machine Learning, <a href="https://sld.cs.columbia.edu/pubs/giri_date20.pdf">DATE Conference 2020</a>. </li>
 <li> Accelerated Machine Learning as a Service for Particle Physics Computing, <a href="https://ml4physicalsciences.github.io/files/NeurIPS_ML4PS_2019_64.pdf">NeurIPS ML4PS Workshop 2019</a>. </li>
 <li> Fast inference of deep neural networks in FPGAs for particle physics, <a href="https://doi.org/10.1088/1748-0221/13/07/P07027">JINST 13, P07027 (2018)</a>. </li>