Introduction to Distributed Computing (Workshops @ Kempner)
Join us for an interactive workshop on distributed computing as part of the Workshops @ Kempner series! We’ll explore the types of problems it can address and key forms of communication between computers. This workshop will focus on two widely used forms of distributed computing: embarrassingly parallel processes for tasks like hyperparameter sweeps, and Distributed Data Parallel (DDP) processes, which facilitate training machine learning models across multiple GPUs.
Date: Tuesday, October 10th
Time: 10 am – 1 pm
Location: SEC Building, Room 6.242 (Kempner Large Conference Room)
Who can attend this workshop?
Open to the Kempner Institute community. Harvard affiliates may join if space is available.
What will attendees learn from this workshop?
- A clear understanding of distributed computing, different forms of communication between computers, and how to determine when distributed computing could be helpful for your research
- How to use array jobs for hyperparameter sweeps, an example of an embarrassingly parallel workload
- How to use Distributed Data Parallel (DDP) when training multi-layer perceptrons in PyTorch
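To give a flavor of the array-job pattern covered above: SLURM assigns each task in an array job an index via the `SLURM_ARRAY_TASK_ID` environment variable, and a common approach is to map that index onto one combination in a hyperparameter grid. The sketch below illustrates the idea; the grid values and script name are hypothetical examples, not part of the workshop materials.

```python
import itertools
import os

# Hypothetical hyperparameter grid, for illustration only.
LEARNING_RATES = [1e-3, 1e-4]
BATCH_SIZES = [32, 64, 128]

# Every combination of hyperparameters, in a fixed, reproducible order.
GRID = list(itertools.product(LEARNING_RATES, BATCH_SIZES))


def config_for_task(task_id: int) -> dict:
    """Map a SLURM array task ID to one hyperparameter combination."""
    lr, batch_size = GRID[task_id]
    return {"lr": lr, "batch_size": batch_size}


if __name__ == "__main__":
    # SLURM sets SLURM_ARRAY_TASK_ID for each task in an array job,
    # e.g. one submitted with: sbatch --array=0-5 sweep.sh
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))
    print(config_for_task(task_id))
```

Each of the six tasks then runs the same script with a different configuration, with no communication between tasks, which is exactly what makes the sweep embarrassingly parallel.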
Prerequisites:
- Basic knowledge of SLURM
- Familiarity with multi-layer perceptrons
- Knowledge of PyTorch and backpropagation is helpful but not required
Registration:
Please register as soon as possible here. Space is limited, and registration will be on a first-come, first-served basis.
Contact Information:
For any questions about the workshop, please contact kempnereducation@harvard.edu.