Last-iterate convergence theory of distributed momentum-based stochastic gradient descent algorithms

 
Speaker:

Dr. Difei Cheng, Institute of Automation, Chinese Academy of Sciences

Inviter: Prof. Bo Zhang
Title:
Last-iterate convergence theory of distributed momentum-based stochastic gradient descent algorithms
Time & Venue:

2024.4.28 15:30 N613

Abstract:

Adding momentum to the stochastic gradient descent (SGD) algorithm, an improvement known as momentum-based SGD (mSGD), is one of the most important techniques for accelerating convergence, but its practical application encounters several challenges. The classical mSGD algorithm is designed for an architecture in which a central server collects massive amounts of data from different edge devices and performs the optimization. However, this architecture raises data privacy concerns for the local edge devices, as well as communication overhead caused by the continuous transmission of large volumes of raw data. As a result, many distributed variants have been proposed, but there has been little corresponding convergence theory for distributed mSGD algorithms. This talk concerns the last-iterate convergence theory for a class of distributed mSGD algorithms with a decaying learning rate {ε_n}_{n≥0}, and is based on joint work with Ruinan Jin, Bo Zhang, and Hong Qiao.
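As a rough illustration of the setting, the following is a minimal sketch of a distributed mSGD iteration with a decaying learning rate. The function names, the simple consensus-averaging scheme, the noise model, and the O(1/n) learning-rate choice are all illustrative assumptions, not the specific algorithm or conditions analyzed in the talk.

```python
import numpy as np

def distributed_msgd(grad, x0, num_nodes=4, beta=0.9, steps=200, seed=0):
    """Illustrative distributed momentum SGD with decaying learning rate.

    Each node keeps a local iterate and momentum buffer; after every local
    mSGD update the iterates are averaged (a toy consensus step). This is a
    sketch under assumed settings, not the algorithm from the talk.
    """
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (num_nodes, 1))
    v = np.zeros_like(x)
    for n in range(steps):
        eps_n = 0.1 / (n + 1)  # decaying learning rate {eps_n}, assumed O(1/n)
        for i in range(num_nodes):
            # stochastic gradient: true gradient plus small Gaussian noise
            g = grad(x[i]) + 0.01 * rng.standard_normal(x[i].shape)
            v[i] = beta * v[i] + g       # momentum buffer update
            x[i] = x[i] - eps_n * v[i]   # local mSGD step
        x[:] = x.mean(axis=0)            # consensus averaging across nodes
    return x[0]  # last iterate (nodes agree after averaging)
```

For example, minimizing f(x) = ||x||^2 (gradient 2x) from the point (1, 1) drives the last iterate toward the origin as n grows.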


Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Address: No. 55 Zhongguancun East Road, Haidian District, Beijing (Siyuan Building, floors 6-7; South Building, floors 5-6 and 8); Postal code: 100190; Email: iam@amss.ac.cn