MATLAB Packages I have written for this course
  • Subgradient Solver: SSMS - Implements subgradient methods for convex optimization problems. Status: Usable, in the modification phase.
  • Decomposition Methods: DCM. Status: Not usable.



CCO-10/11 is divided into 4 sections:
  • Section-1:
    Understanding Mathematical Optimization & Different Methods: Presentations in Convex & Combinatorial Optimization; the presentations form a sequence, and ideas from earlier presentations are carried forward into later ones. Most presentations with applications will include MATLAB code, which may or may not use the solvers from Section-2. The purely theoretical presentations build up the background knowledge in Convex & Combinatorial Optimization.

  • Section-2:
    Solvers: General mathematical optimization solvers, such as gradient descent and conjugate gradient methods. The presentations in this section do not necessarily follow a sequence. This section is suitable for those who already know the theoretical details and only need to brush up the theory and practice applying these methods.

  • Section-3:
    Continuous Relaxation and Lower Bound Computation: Advanced topics involving continuous relaxation and lower bound computation. Each presentation is independent of the others, may draw on Sections 1 & 2, and may assume complete knowledge of Convex Optimization and basic concepts of Combinatorial Optimization.

  • Section-4: Publications
    To be explained:
    - Subgradient Methods in Network Resource Allocation - Angelia Nedich & Asuman Ozdaglar.
    - Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods - Angelia Nedich & Asuman Ozdaglar.
    - Subgradient Methods for Saddle Point Problems - Angelia Nedich & Asuman Ozdaglar.


Publications by the following people will be emphasized for illustration:
  • Stephen Boyd: Stanford Univ, EE, Convex Optimization & Decomposition methods.
  • Asuman Ozdaglar: Massachusetts Institute of Technology, EECS, Decomposition Methods (mainly Dual) and Convex Optimization.
  • Angelia Nedich: UIUC, IESE, outstanding papers in Distributed Convex Optimization.
  • Jason C. Derenick - PhD Dissertation - A Convex Optimization Framework for Multi-Agent Motion Planning - Very important if you are following CFMAS:SP-11 along with CCO-10/11.

The following publications are quite interesting and practical (basic knowledge of subgradient methods is the only prerequisite for understanding them):

  1. Subgradient Methods in Network Resource Allocation: Rate Analysis (SMNRA).

  2. Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods.
    - Each proposition that is mentioned without proof in SMNRA is derived and explained in detail in this publication.
    - I have implemented these papers in the SSMS package, so the bounds mentioned in the publications can be checked (esp. the bound on constraint violation); see the sketch after this list.

  3. Subgradient Methods for Saddle Point Problems.
    - Various bounds are mentioned (some of which follow directly from SMNRA).
    - However, Proposition 2 provides new bounds.
    - A new projection method for the dual iterates is provided in the paper, along with a new bound on the amount of constraint violation.
    - The novelty is that the dual set changes at each iteration of the algorithm, and is found by minimizing an upper bound on the primal objective function.
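
As referenced above, here is a minimal sketch (not the SSMS package itself, and with illustrative problem data) of a dual subgradient method with primal averaging, in the spirit of the Nedich & Ozdaglar papers: the dual multiplier is updated by a projected subgradient step, the primal iterates are averaged, and the constraint violation of the averaged iterate is reported. Clipping the Lagrangian minimizer to the box is valid here only because the Lagrangian is a separable quadratic.

    % Assumed toy problem: minimize ||x - c||^2  subject to  a'*x <= b,  lb <= x <= ub.
    c  = [2; 3];  a = [1; 1];  b = 3;       % illustrative problem data
    lb = [0; 0];  ub = [5; 5];
    alpha = 0.01;                           % constant step size
    K  = 5000;                              % number of iterations
    mu = 0;                                 % dual multiplier
    xsum = zeros(2, 1);                     % running sum for primal averaging
    for k = 1:K
        % Lagrangian minimizer over the box: set 2*(x - c) + mu*a = 0, then clip.
        x  = min(max(c - 0.5*mu*a, lb), ub);
        g  = a'*x - b;                      % subgradient of the dual function at mu
        mu = max(0, mu + alpha*g);          % projected dual subgradient step
        xsum = xsum + x;
    end
    xhat = xsum / K;                        % averaged primal solution
    fprintf('f(xhat) = %.4f, constraint violation = %.2e\n', ...
            norm(xhat - c)^2, max(0, a'*xhat - b));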


The following books are good to read along with the course and will be referenced in the lectures wherever necessary:
  • Convex Optimization: Stephen Boyd and Lieven Vandenberghe
  • Numerical Optimization: J. Nocedal and S. J. Wright
  • Combinatorial Optimization, 4th Ed.: Bernhard Korte and Jens Vygen (Warning: dense and difficult to read; the lectures will refer to it only where it is used)


PRESENTATIONS:

Section - 1:
Searching-Eye
CCO-10/11: To cover the background in Mathematical Optimization for the presentations/projects in ARL-10/11 and MLR.
  • The videos herein discuss convex sets, convex functions, convex optimization problems and algorithms.
  • All the videos have a MATLAB code (wherever applicable).
  • It is assumed that the viewers have basic background in linear algebra.
Mathematical Optimization: Convex and Integer Programming





General Mathematical Optimization: Download-pdf-presentation.
Reference Reading: Ch-01 Boyd and Vandenberghe. MATLAB Code: Not Applicable (N/A)
Short Notes: A mathematical optimization problem involves minimizing (or maximizing) a function with respect to an optimization variable, subject to constraints on that variable. There are several classes of optimization problems, for example Linear Programming, Least-Squares, Convex Programming and general Nonlinear Programming problems. Linear programs and least-squares problems can be solved reliably and efficiently. Convex programming includes linear programs and least-squares problems as special cases, and convex programs can be solved efficiently by methods such as interior point, subgradient, cutting plane and ellipsoid methods. Solving a general nonlinear programming problem (not linear and not known to be convex) is a challenging task, as no general efficient approach is known; therefore one usually settles for a locally optimal solution. Convex optimization also plays a role in nonlinear programming through lower bound computation: several relaxation methods replace the nonconvex constraints with looser convex constraints in order to find a lower bound on the NLP problem.
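
A small illustration (with assumed random data) of the two "easy" problem classes mentioned above: a least-squares problem solved with MATLAB's backslash operator, and a linear program solved with linprog from the Optimization Toolbox.

    A = randn(20, 5);  b = randn(20, 1);
    x_ls = A \ b;                           % least-squares: minimize ||A*x - b||^2

    f = [-1; -2];                           % LP: minimize f'*x
    Aineq = [1 1; 3 1];  bineq = [4; 6];    % subject to Aineq*x <= bineq and x >= 0
    x_lp = linprog(f, Aineq, bineq, [], [], [0; 0], []);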

Section - 2:
Unconstrained Minimization: Theoretical Analysis of Stopping Criterion & Condition Number; Download - PDF-presentation.
Reference Reading: Ch-09 Boyd and Vandenberghe. MATLAB Code: Not Applicable (N/A)
Short Notes: Unconstrained minimization is minimizing a function f(x) without any constraints; the only constraints are the implicit ones imposed by the domain of the function. These problems are solved by iterative descent methods such as gradient descent, steepest descent and Newton's method, which rely on a line search. Because these solvers are iterative, we need a criterion for stopping the algorithm and a way to predict its performance. The performance of these solvers depends on the condition number of the Hessian of the objective, $\kappa(\nabla^2 f(x))$, near the optimal point: the rate of convergence degrades as the sublevel sets become more eccentric, i.e. as the condition number grows. Moreover, these algorithms use the norm of the gradient of the objective, $\|\nabla f(x)\|$, as a stopping criterion. This presentation provides a complete derivation of the stopping criterion, and discusses the conditioning of the $\alpha$-sublevel sets and its relation to $\kappa(\nabla^2 f(x))$.
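
As a quick companion to the notes above, a short sketch (on an assumed quadratic test function) of the two quantities involved: the gradient-norm stopping criterion and the condition number of the Hessian at the optimum.

    P = [10 0; 0 1];                        % Hessian of f(x) = 0.5*x'*P*x, so kappa = 10
    f     = @(x) 0.5 * x' * P * x;
    gradf = @(x) P * x;
    x = [1; 1];  eta = 1e-6;                % tolerance on the gradient norm
    while norm(gradf(x)) > eta              % stopping criterion: ||grad f(x)|| <= eta
        x = x - 0.1 * gradf(x);             % fixed-step gradient step (kept simple here)
    end
    kappa = cond(P);                        % condition number of the Hessian (= 10)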
Short Notes: Methods for solving unconstrained minimization problems include descent algorithms. These are iterative solvers that alternate between two steps: Step-1 involves finding a search direction $\Delta x$, and Step-2 involves finding a step length t to move along $\Delta x$. The step length is found using line search methods, of which there are two kinds, exact and inexact. Exact line search finds the step length for which $f(x + t\Delta x)$ is minimized, i.e. $t = \operatorname{argmin}_{s>0} f(x + s\Delta x)$. Inexact line search finds a step length for which $f(x + t\Delta x)$ is only approximately minimized; the most popular is the backtracking line search algorithm, which depends on two constants $\alpha$ and $\beta$. Gradient descent is the algorithm that uses $\Delta x = -\nabla f(x)$, i.e. the direction of maximum local decrease of f(x), and uses line search (exact or backtracking) to find the step length to move in this direction. The performance of the algorithm depends on the sublevel sets of f(x) near the optimum. This presentation shows how to implement and apply the backtracking line search algorithm in practice.
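
A minimal backtracking line search, consistent with the description above (the parameter values $\alpha$ and $\beta$ are illustrative, not prescribed by the course):

    function t = backtracking(f, gradfx, x, dx, alpha, beta)
    % f: function handle, gradfx: gradient at x, dx: descent direction,
    % alpha in (0, 0.5), beta in (0, 1).
    t = 1;
    while f(x + t*dx) > f(x) + alpha * t * gradfx' * dx
        t = beta * t;                       % shrink t until sufficient decrease holds
    end
    end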
Short Notes: Gradient descent is an iterative algorithm for solving unconstrained minimization problems. At each iteration it finds a descent direction and a step length to move in that direction: Step-1 involves finding the descent direction $\Delta x$, where $\Delta x = -\nabla f(x)$ for gradient descent, and Step-2 involves finding a step length t to move along $\Delta x$. Exact line search or backtracking line search may be used to find the step length. The rate of convergence of gradient descent depends on the eccentricity of the sublevel sets, which is governed by the condition number of the Hessian of the function at the optimum, i.e. $\kappa(\nabla^2 f(x^*))$. This presentation discusses the convergence analysis, and the dependence of the rate of convergence on the condition number, for gradient descent with line search as the step-length rule.
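
A gradient descent loop (a sketch on an assumed test function) that puts the two steps together: the descent direction is the negative gradient, and the step length comes from backtracking line search.

    f     = @(x) 0.5 * (10*x(1)^2 + x(2)^2);   % assumed test function
    gradf = @(x) [10*x(1); x(2)];
    x = [1; 1];  alpha = 0.25;  beta = 0.5;  eta = 1e-8;
    while norm(gradf(x)) > eta
        dx = -gradf(x);                        % Step-1: descent direction
        t  = 1;                                % Step-2: backtracking line search
        while f(x + t*dx) > f(x) + alpha * t * gradf(x)' * dx
            t = beta * t;
        end
        x = x + t * dx;
    end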
Steepest descent is another algorithm for solving unconstrained minimization problems. It is an iterative algorithm in which, at each iteration, the algorithm finds a steepest descent direction under some norm, with the length of the descent vector constrained by that norm $\|\cdot\|$. Different norms for constraining the length of the descent direction result in different descent algorithms. The L2 norm results in the gradient descent algorithm; the quadratic norm $\|z\|_P = (z'Pz)^{1/2}$ results in an algorithm equivalent to gradient descent after the change of coordinates $y = P^{1/2}z$, which can be helpful if, after the change of coordinates, the eccentricity of the sublevel sets (i.e. the condition number of the Hessian, $\kappa(\nabla^2 f(x))$) is moderate. If the L1 norm is used to constrain the length, $\|v\|_1 \le 1$, the result is a coordinate descent algorithm. All these algorithms can be derived by solving the problem for the normalized steepest descent direction, $\Delta x_{nsd} = \operatorname{argmin}_v \{\nabla f(x)'v \mid \|v\| \le 1\}$; using the inequality constraint $\|v\| \le 1$ instead of the equality $\|v\| = 1$ simplifies the derivation. The convergence of steepest descent under any valid norm is at least linear. This lecture derives the algorithm for the L1 norm, the L2 norm and the quadratic norm, and performs the convergence analysis of steepest descent methods under a general norm.
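
For concreteness, the normalized steepest descent directions under the three norms discussed above (a sketch with an assumed gradient g and assumed matrix P):

    g = [3; -1];                               % gradient at the current point (assumed)
    P = [4 0; 0 1];                            % P > 0 defines the quadratic norm
    dx_l2   = -g / norm(g);                    % L2 norm: normalized negative gradient
    dx_quad = -(P \ g) / sqrt(g' * (P \ g));   % quadratic norm ||z||_P = (z'*P*z)^(1/2)
    [~, i]  = max(abs(g));                     % L1 norm: move along a single coordinate
    dx_l1   = zeros(size(g));  dx_l1(i) = -sign(g(i));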
Unconstrained Minimization: Newton Method Using Backtracking Line Search; Video Lecture: Coming Soon; Download - PDF-presentation.
Reference Reading: Ch-09 Boyd and Vandenberghe. MATLAB Code: seye_newton_method_sanjeev.zip
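
A hedged sketch of Newton's method with backtracking line search (this is not the packaged seye_newton_method code; the test function is an assumed example):

    f     = @(x) exp(x(1)+x(2)) + x(1)^2 + x(2)^2;
    gradf = @(x) [exp(x(1)+x(2)) + 2*x(1); exp(x(1)+x(2)) + 2*x(2)];
    hessf = @(x) exp(x(1)+x(2)) * ones(2) + 2*eye(2);
    x = [1; 1];  alpha = 0.25;  beta = 0.5;  tol = 1e-10;
    while true
        dx = -(hessf(x) \ gradf(x));           % Newton step
        lambda2 = -gradf(x)' * dx;             % squared Newton decrement
        if lambda2 / 2 <= tol, break; end      % stopping criterion
        t = 1;                                 % backtracking line search
        while f(x + t*dx) > f(x) + alpha * t * gradf(x)' * dx
            t = beta * t;
        end
        x = x + t * dx;
    end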

Section - 3:
Part - 1: Mixed-Integer Programming
  • An Introduction to Mixed Integer Linear Programs: Time To Warm Up. Video Lecture - Coming Soon. PDF-Presentation - Coming Soon. MATLAB Code - Applicable.

Part - 2: Relaxations and Reformulations

Part - 3: Applications: Coalition Formation & Path Planning


Section - 4:
Subgradient Methods in Network Resource Allocation: Rate Analysis - Video Lecture - Coming Soon. MATLAB Code: Download the SSMS Package