- Subgradient Solver (SSMS): implements subgradient methods for convex optimization problems. **Status:** Usable, in modification phase.
- Decomposition Methods (DCM). **Status:** Not usable.

- Section-1:

Understanding Mathematical Optimization & Different Methods: Presentations in Convex & Combinatorial Optimization; the presentations form a sequence, and the ideas of earlier presentations are built upon in later ones. Most of the presentations with applications contain MATLAB code, which may or may not use the solvers from Section 2. The presentations with only theoretical discussions form the background knowledge in Convex & Combinatorial Optimization.

- Section-2:

Solvers: General mathematical optimization solvers, such as gradient descent, conjugate gradient methods, etc. This section may or may not follow a sequence between the presentations. It is suitable for those who already know the theoretical details and want to sharpen their skills in applying these methods in practice.

- Section-3:

Continuous Relaxation and Lower Bound Computation: Advanced topics involving continuous relaxation and lower bound computation. Each presentation is independent of the others, may build on Sections 1 & 2, and may assume complete knowledge of Convex Optimization and the basic concepts of Combinatorial Optimization.

- Section-4: Publications

To be explained:

- Subgradient Methods in Network Resource Allocation - Angelia Nedich & Asuman Ozdaglar.

- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods - Angelia Nedich & Asuman Ozdaglar.

- Subgradient Methods for Saddle Point Problems - Angelia Nedich & Asuman Ozdaglar.

- Stephen Boyd: Stanford Univ, EE, Convex Optimization & Decomposition methods.
- Asuman Ozdaglar: Massachusetts Institute of Technology, CS, Decomposition Methods (mainly Dual) and Convex Optimization.
- Angelia Nedich: UIUC, IESE, Outstanding papers in Distributed Convex Optimization.
- Jason C. Derenick - PhD Dissertation - A Convex Optimization Framework for Multi-Agent Motion Planning - Very important if you are following CFMAS:SP-11 along with CCO-10/11.

- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods.

- Each proposition that is stated without proof in SMNRA is derived and explained in detail in this publication.

- I have implemented these papers in the SSMS package, so the bounds mentioned in the publications can be checked (especially the bound on constraint violation).

- Subgradient Methods for Saddle Point Problems.

- Various bounds are mentioned (some of which follow directly from SMNRA).

- Proposition 2, however, provides new bounds.

- A new projection method for the dual iterates is provided in the paper, along with a new bound on the amount of constraint violation.

- The novelty is that the dual set changes at each iteration of the algorithm, and the dual set is found by minimizing the upper bound on the primal objective function.
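The dual subgradient scheme underlying these papers can be illustrated on a toy problem. The sketch below is Python (the SSMS package itself is MATLAB), the problem instance, step size, and iteration count are all illustrative choices of mine, and the primal recovery shown is the simple running average of primal minimizers from the rate-analysis paper, not the changing-dual-set projection.

```python
# Toy dual subgradient ascent with primal averaging on
#   min x^2  s.t.  1 - x <= 0,
# whose optimum is x* = 1 with optimal multiplier mu* = 2.
mu, x_avg = 0.0, 0.0
alpha, N = 0.05, 2000              # constant step size and iteration count (arbitrary)
for k in range(1, N + 1):
    x = mu / 2.0                   # minimizer of the Lagrangian x^2 + mu*(1 - x)
    x_avg += (x - x_avg) / k       # running average of the primal iterates
    mu = max(0.0, mu + alpha * (1.0 - x))  # projected subgradient step on the dual

print(mu, x_avg)                   # mu approaches 2; x_avg approaches 1
```

The averaged primal iterate is slightly infeasible (here `1 - x_avg` is small and positive), which is exactly the kind of constraint-violation bound the papers quantify.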

- Convex Optimization: Stephen Boyd and Lieven Vandenberghe.
- Numerical Optimization: J. Nocedal and S. J. Wright
- Combinatorial Optimization, 4th Ed.: Bernhard Korte and Jens Vygen (Warning! quite complex to read; the lectures will refer to it whenever it is used.)

- The videos herein discuss convex sets, convex functions, convex optimization problems and algorithms.
- All the videos have a MATLAB code (wherever applicable).
- It is assumed that the viewers have a basic background in linear algebra.

programming problems. Linear Programs and Least-Squares problems can be solved reliably and efficiently. Convex Programming problems include Linear Programs and Least-Squares problems as special cases. Convex Programs can be solved efficiently by methods such as interior point, subgradient, cutting plane and ellipsoid methods. Solving a general Nonlinear Programming problem (not linear and also not known to be convex) is a challenging task, as no specific approach is known; a common strategy is therefore to find a locally optimal solution. Convex Optimization plays a role in Nonlinear Programming problems through finding lower bounds: several relaxation methods replace the nonlinear constraints with looser convex constraints to obtain a lower bound on the NLP problem.
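The bounding idea can be seen on a tiny combinatorial example. Below is a Python sketch with an invented covering-knapsack instance (the numbers are mine, not from the lectures): the binary problem is solved by brute force, and its continuous relaxation, whose LP optimum is obtained greedily by value-per-weight density, gives a lower bound on the true optimum.

```python
from itertools import product

# Hypothetical instance: minimize total weight s.t. collected value >= V, x binary.
values, weights, V = [10, 7, 4], [5, 4, 3], 11

# Exact combinatorial optimum by enumeration over x in {0,1}^3.
best = min(sum(w for w, xi in zip(weights, x) if xi)
           for x in product([0, 1], repeat=3)
           if sum(v for v, xi in zip(values, x) if xi) >= V)

# Continuous relaxation 0 <= x <= 1: this LP is solved greedily by
# value-per-weight density, and its value lower-bounds the integer optimum.
need, relaxed = V, 0.0
for v, w in sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True):
    take = min(1.0, need / v)      # take the densest item, fractionally if needed
    relaxed += take * w
    need -= take * v
    if need <= 0:
        break

print(best, relaxed)               # relaxed <= best: the relaxation is a lower bound
```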

algorithms involve an iterative solver. Therefore we need a criterion for stopping the algorithm, and we also need to predict its performance. These iterative solvers depend on the condition number of the Hessian of the objective, $\kappa(\nabla^2 f(x))$, near the optimal point: the rate of convergence depends on this condition number, which measures the eccentricity of the sublevel sets. Moreover, these algorithms use the norm of the gradient of the objective function as a stopping criterion. This presentation provides a complete derivation of the stopping criterion and discusses the condition number of the $\alpha$-sublevel sets and its relation to $\kappa(\nabla^2 f(x))$.
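Both points, the gradient-norm stopping criterion and the dependence on the condition number, can be demonstrated in a few lines. This is a Python sketch on a diagonal quadratic of my own choosing (not one of the lecture examples); the fixed step 1/L stands in for a line search.

```python
def iterations_to_stop(q, eta=1e-6, max_iter=100000):
    """Gradient descent on f(x) = 0.5 * sum(q_i * x_i^2) with fixed step 1/L,
    stopped by the gradient-norm criterion ||grad f(x)|| <= eta."""
    L = max(q)                         # Lipschitz constant of the gradient
    x = [1.0] * len(q)
    for k in range(max_iter):
        g = [qi * xi for qi, xi in zip(q, x)]
        if sum(gi * gi for gi in g) ** 0.5 <= eta:   # stopping criterion
            return k
        x = [xi - gi / L for xi, gi in zip(x, g)]
    return max_iter

well = iterations_to_stop([1.0, 1.0])      # Hessian condition number kappa = 1
ill  = iterations_to_stop([1.0, 100.0])    # Hessian condition number kappa = 100
print(well, ill)    # the ill-conditioned problem needs far more iterations
```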

line search methods. There are two kinds of line search algorithms: exact and inexact line search. Exact line search finds the step length for which the function $f(x + t\,\Delta x)$ is minimized, i.e. $t = \operatorname{argmin}_{s>0} f(x + s\,\Delta x)$. Inexact line search just finds a step length for which the function is approximately minimized. The most popular variant is the backtracking line search algorithm, which depends on two constants, $\alpha$ and $\beta$. Gradient descent is an algorithm that uses $\Delta x = -\nabla f(x)$, i.e. the direction of maximum local decrease of f(x), and uses line search (exact or backtracking) to find the step length to move in this direction. The performance of the algorithm depends on the sublevel sets of f(x) near the optimum. This presentation shows how to apply the backtracking line search algorithm in practice.
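Backtracking line search is short enough to sketch directly. The Python below is my own minimal version (the lecture's MATLAB code is the reference implementation); the quadratic test function and the constants $\alpha = 0.3$, $\beta = 0.8$ are illustrative choices.

```python
def backtracking(f, grad, x, dx, alpha=0.3, beta=0.8):
    """Backtracking line search: shrink t by beta until the sufficient-decrease
    condition f(x + t*dx) <= f(x) + alpha * t * grad(x)'dx holds."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad(x), dx))  # directional derivative (< 0)
    t = 1.0
    while f([xi + t * di for xi, di in zip(x, dx)]) > fx + alpha * t * slope:
        t *= beta
    return t

# Usage on a toy quadratic, stepping along the negative gradient:
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]
x0 = [1.0, 1.0]
dx = [-g for g in grad(x0)]          # descent direction Delta x = -grad f(x0)
t = backtracking(f, grad, x0, dx)
print(t)                             # an accepted step length in (0, 1]
```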

to move in the direction $\Delta x = -\nabla f(x)$. Exact line search or backtracking line search may be used to find the step length. The rate of convergence of the Gradient Descent method depends on the eccentricity of the sublevel sets, which is measured by the condition number of the Hessian of the function at the optimum, i.e. $\kappa(\nabla^2 f(x^*))$. This presentation discusses the convergence analysis of Gradient Descent with line search, and how the rate of convergence depends on the condition number.
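The predicted linear rate can be checked numerically. The Python sketch below uses Boyd & Vandenberghe's classic example $f(x) = \tfrac12(x_1^2 + \gamma x_2^2)$ with their worst-case starting point; on a quadratic the exact line-search step has the closed form $t = g^T g / g^T \nabla^2 f\, g$, and the observed per-step contraction of $f$ matches $((\gamma-1)/(\gamma+1))^2$.

```python
# f(x) = 0.5*(x1^2 + gamma*x2^2): the Hessian condition number is kappa = gamma.
gamma = 10.0
f = lambda x: 0.5 * (x[0] ** 2 + gamma * x[1] ** 2)

x = [gamma, 1.0]                     # worst-case start from Boyd & Vandenberghe
ratios = []
for _ in range(20):
    g = [x[0], gamma * x[1]]         # gradient of f
    # exact line search on a quadratic: t = g'g / g'(Hessian)g
    t = sum(gi * gi for gi in g) / (g[0] ** 2 + gamma * g[1] ** 2)
    x_new = [x[0] - t * g[0], x[1] - t * g[1]]
    ratios.append(f(x_new) / f(x))   # per-iteration contraction of f
    x = x_new

print(ratios[0], ((gamma - 1) / (gamma + 1)) ** 2)  # observed vs. theoretical rate
```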

Steepest Descent is one of the algorithms for solving Unconstrained Minimization problems. It is an iterative algorithm: at each iteration, the algorithm finds a descent direction that is the steepest descent direction under some norm, with the length of the descent vector constrained by a valid norm ($\|\cdot\|$). Different norms used to constrain the length of the descent direction result in different descent algorithms. The L2-norm yields the Gradient Descent algorithm; the quadratic norm ($\|z\|_P = (z^T P z)^{1/2}$) yields an algorithm equivalent to Gradient Descent after the change of coordinates $y = P^{1/2} z$. This may be helpful if, after the change of coordinates, the eccentricity of the sublevel sets (i.e. the condition number of the Hessian, $\kappa(\nabla^2 f(x))$) is moderate. If the L1-norm is used to constrain the length, $\|v\|_1 \le 1$, the result is the coordinate descent algorithm. All of these algorithms can be derived by solving an equivalent constrained optimization problem for the normalized steepest descent direction, $\Delta x_{nsd} = \operatorname{argmin}_v \{\nabla f(x)^T v \mid \|v\|^2 \le 1\}$, instead of solving the exact problem $\Delta x_{nsd} = \operatorname{argmin}_v \{\nabla f(x)^T v \mid \|v\| \le 1\}$; adding the constraint $\|v\|^2 \le 1$ simplifies the derivation. The convergence of steepest descent under any general norm is at least linear. This lecture derives each of the algorithms for the L1-norm, the L2-norm and the quadratic norm, and performs the convergence analysis of Steepest Descent methods under a general norm.
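The three normalized steepest descent directions have closed forms that are easy to write down. This Python sketch is mine (the lectures use MATLAB), and it assumes a diagonal $P$ purely to keep the code dependency-free; the gradient vector is an arbitrary illustrative value.

```python
def nsd_l2(g):
    """argmin{ g'v : ||v||_2 <= 1 }  ->  v = -g/||g||_2  (gradient descent)."""
    n = sum(gi * gi for gi in g) ** 0.5
    return [-gi / n for gi in g]

def nsd_l1(g):
    """argmin{ g'v : ||v||_1 <= 1 }  ->  -sign(g_i) e_i for the largest |g_i|
    (coordinate descent)."""
    i = max(range(len(g)), key=lambda j: abs(g[j]))
    v = [0.0] * len(g)
    v[i] = -1.0 if g[i] > 0 else 1.0
    return v

def nsd_quadratic(g, P):
    """argmin{ g'v : v'Pv <= 1 }  ->  v = -P^{-1}g / (g'P^{-1}g)^{1/2}
    (gradient descent in the coordinates y = P^{1/2} x). P diagonal here."""
    u = [gi / pi for gi, pi in zip(g, P)]             # P^{-1} g
    n = sum(gi * ui for gi, ui in zip(g, u)) ** 0.5   # (g' P^{-1} g)^{1/2}
    return [-ui / n for ui in u]

g = [3.0, -4.0]
print(nsd_l2(g), nsd_l1(g), nsd_quadratic(g, [1.0, 4.0]))
```

Each direction has unit length in its own norm and a negative directional derivative, which is exactly what the constrained derivation above guarantees.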

Part - 1: Mixed-Integer Programming

- An Introduction to Mixed Integer Linear Programs: Time to Warm Up. Video Lecture - Coming Soon. PDF Presentation - Coming Soon. MATLAB Code - Applicable.

Part - 2: Relaxations and Reformulations

Part - 3: Applications: Coalition Formation & Path Planning

Subgradient Methods in Network Resource Allocation: Rate Analysis - Video Lecture - Coming Soon. MATLAB Code: Download the SSMS Package