DCL Seminar Series: Mihailo Jovanovic "The proximal augmented Lagrangian method for nonsmooth composite optimization"

Event Type
Seminar/Symposium
Sponsor
Decision and Control Laboratory
Location
B02 CSL Auditorium
Date
October 24, 2018 3:00 PM - 4:00 PM
Speaker
Mihailo Jovanovic, University of Southern California
Contact
Angie Ellis
Email
amellis@illinois.edu
Phone
217-493-8959

Decision and Control Laboratory Lecture Series

Coordinated Science Laboratory


The proximal augmented Lagrangian method for nonsmooth composite optimization


Mihailo R. Jovanovic

Professor, Ming Hsieh Department of Electrical Engineering

Founding Director, Center for Systems and Control

University of Southern California


Wednesday, October 24, 2018

3:00 p.m. to 4:00 p.m.

CSL Auditorium (B02)

Title: The proximal augmented Lagrangian method for nonsmooth composite optimization

Abstract:

Several problems in the design of networks can be formulated as nonsmooth composite optimization problems in which the objective function is the sum of a differentiable term and a non-differentiable regularizer. For example, adding edges to improve network performance, redesigning edge weights to optimize a performance metric, and selecting important nodes in a network with a given topology all fit into this category. In this talk, I will describe a primal-dual method based on the proximal augmented Lagrangian for solving this class of nonsmooth optimization problems. After introducing an auxiliary variable, we utilize the proximal operator of the nonsmooth regularizer to transform the associated augmented Lagrangian into a function that is once, but not twice, continuously differentiable. Saddle points of this function, which we call the proximal augmented Lagrangian, correspond to the solutions of the original optimization problem. This function is used to develop customized algorithms based on first- and second-order primal-dual methods. When the differentiable component of the objective function is strongly convex with a Lipschitz continuous gradient, we employ the theory of integral quadratic constraints to prove exponential convergence of the primal-descent dual-ascent gradient method. We also use a generalization of the Hessian to define second-order updates on this function and prove global exponential stability of the corresponding differential inclusion. We close the talk by discussing the classes of problems that are amenable to distributed optimization and comparing the performance of the developed method with state-of-the-art alternatives.
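
As a rough illustration of the construction sketched in the abstract (and not code from the talk itself), the Python snippet below applies a primal-descent dual-ascent iteration to a toy l1-regularized least-squares problem. The nonsmooth term is handled through its proximal operator (soft thresholding), and the gradient of the corresponding Moreau envelope supplies the once-differentiable surrogate; the problem data, choice of regularizer, step size, and variable names are all illustrative assumptions.

    # Illustrative sketch (assumptions, not the speaker's code): primal-descent
    # dual-ascent on a proximal-augmented-Lagrangian-style function for the toy
    # problem  minimize_x  0.5*||A x - b||^2 + lam*||x||_1,
    # where the nonsmooth l1 term enters through its proximal operator.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)
    lam, mu, alpha = 0.1, 1.0, 1e-3  # regularization weight, prox parameter, step size

    def prox_l1(v, t):
        # Soft thresholding: proximal operator of t*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(A.shape[1])  # primal variable
    y = np.zeros(A.shape[1])  # dual variable for the auxiliary copy constraint

    for _ in range(20000):
        v = x + mu * y
        p = prox_l1(v, mu * lam)
        # Gradient of the Moreau envelope of mu*lam*||.||_1 at v is (v - p)/mu.
        grad_x = A.T @ (A @ x - b) + (v - p) / mu  # primal (descent) direction
        grad_y = x - p                             # dual (ascent) direction
        x -= alpha * grad_x
        y += alpha * grad_y

    obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    print("objective value:", obj)

At a saddle point the primal variable coincides with its proximal point, which recovers the optimality condition of the original composite problem; this is the sense in which saddle points of the proximal augmented Lagrangian correspond to solutions of the original problem.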

Bio:

Mihailo R. Jovanovic is a professor in the Ming Hsieh Department of Electrical Engineering and the founding director of the Center for Systems and Control at the University of Southern California. He was a faculty member in the Department of Electrical and Computer Engineering at the University of Minnesota, Minneapolis, from December 2004 until January 2017, and has held visiting positions at Stanford University and the Institute for Mathematics and its Applications. His current research focuses on the design of controller architectures, dynamics and control of fluid flows, and fundamental limitations in the control of large networks of dynamical systems. He serves as an Associate Editor of the IEEE Transactions on Control of Network Systems, and has served as the Chair of the APS External Affairs Committee, as an Associate Editor of the SIAM Journal on Control and Optimization, and as an Associate Editor on the IEEE Control Systems Society Conference Editorial Board. Prof. Jovanovic is a Fellow of APS and a Senior Member of IEEE. He received a CAREER Award from the National Science Foundation in 2007, the George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society in 2013, and the Distinguished Alumni Award from UC Santa Barbara in 2014.