Course materials

Lectures

  • Lecture 1: Introduction -- Jan. 16, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 2: Centralized Convex ML (part 1) -- Jan. 23, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 3: Centralized Convex ML (part 2) -- Jan. 30, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 4: Centralized Nonconvex ML -- Feb. 6, 2019, 13:00-15:00

<slides>, <video>

  • Lecture 5: Distributed ML -- Feb. 13, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 6: ADMM, guest lecturer Dr. Euhanna Ghadimi -- Feb. 20, 2019, 10:00-12:00

<slides>, <video>, <MATLAB code for SVM>

  • Lecture 7: Communication Efficiency -- Feb. 27, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 8: Deep Neural Networks -- Mar. 6, 2019, 10:00-12:00

<slides>, <video>

  • Lecture 9: Special Topic 1: MLoNs with partial knowledge
  • Lecture 10: Special Topic 2: Application areas: dual methods and decomposition algorithms in networks
  • Lecture 11: Special Topic 3: Large-scale MLoNs
  • Lecture 12: Special Topic 4: Online MLoNs
  • Lecture 13: Special Topic 5: Security in MLoNs: using ML to solve network security problems
  • Lecture 14: Federated learning and privacy-preserving distributed MLoNs

Location for Lectures 1-8: Q2, Malvinas väg 10, KTH Main Campus


Location for Lectures 9-13: V32 and L52, KTH Main Campus

All lectures

Download all the lecture slides here

Watch all the lectures here

Assignments

  • All computer assignments (CAs) are available here
  • HW1-3 are posted; see the corresponding lectures above. The deadline for each HW is 7 days after release.
  • CAs 1-3 are posted. The deadline for each CA is 14 days after release.

Readings

  • Bubeck, Sébastien. "Convex optimization: Algorithms and complexity." Foundations and Trends in Machine Learning 8.3-4 (2015): 231-357.
  • Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. "Optimization methods for large-scale machine learning." SIAM Review 60.2 (2018): 223-311.
  • Boyd, Stephen, et al. "Distributed optimization and statistical learning via the alternating direction method of multipliers." Foundations and Trends in Machine Learning 3.1 (2011): 1-122.
  • Jordan, Michael I., Jason D. Lee, and Yun Yang. "Communication-efficient distributed statistical inference." Journal of the American Statistical Association, 2018.
  • Smith, Virginia, et al. "CoCoA: A general framework for communication-efficient distributed optimization." Journal of Machine Learning Research 18 (2018): 230.
  • Alistarh, Dan, et al. "QSGD: Communication-efficient SGD via gradient quantization and encoding." Advances in Neural Information Processing Systems, 2017.
  • Schmidt, Mark, Nicolas Le Roux, and Francis Bach. "Minimizing finite sums with the stochastic average gradient." Mathematical Programming 162.1-2 (2017): 83-112.
  • Boyd, Stephen, et al. "Randomized gossip algorithms." IEEE Transactions on Information Theory, 2006.
  • Scaman, Kevin, et al. "Optimal algorithms for smooth and strongly convex distributed optimization in networks." ICML, 2017.
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
  • "Fast decentralized optimization over networks," https://arxiv.org/pdf/1804.02425.pdf