F. J. Aragón, R. Campoy García, P. T. Vuong
The Boosted Difference of Convex functions Algorithm (BDCA) was recently introduced to accelerate the performance of the classical Difference of Convex functions Algorithm (DCA). The acceleration is achieved through an extrapolation step, performed via a line search, from the point computed by DCA. However, this line search is only guaranteed to terminate finitely when the first function in the DC decomposition is differentiable, so BDCA cannot be efficiently applied to constrained DC problems. In this talk, we present an extension of BDCA that can be applied to DC programs with linear constraints, and we show its convergence under mild assumptions. We also present numerical experiments comparing the performance of DCA and BDCA on some challenging problems.
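For orientation, a minimal sketch of the iteration being accelerated, stated under the standard assumptions of the unconstrained BDCA (objective $\varphi = g - h$ with $g,h$ convex and $g$ differentiable; the symbols $u_k$, $d_k$, $\lambda_k$ and the parameters $\alpha>0$, $\beta\in(0,1)$, $\bar\lambda>0$ are notation introduced here, not taken from the talk):
\[
  y_k \in \operatorname*{argmin}_{x}\,\bigl\{\, g(x) - \langle u_k, x\rangle \,\bigr\}, \qquad u_k \in \partial h(x_k) \qquad \text{(DCA step)}
\]
\[
  d_k = y_k - x_k, \qquad \text{take the largest } \lambda_k \in \{\bar\lambda\beta^{\,j} : j = 0,1,\dots\} \text{ with } \varphi(y_k + \lambda_k d_k) \le \varphi(y_k) - \alpha\lambda_k^2\|d_k\|^2 \qquad \text{(line search)}
\]
\[
  x_{k+1} = y_k + \lambda_k d_k \qquad \text{(boosted/extrapolation step)}
\]
The line search relies on $d_k$ being a descent direction for $\varphi$ at $y_k$, which is where the differentiability of the first DC component enters.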
Keywords: Optimization, Difference of convex functions, Constrained DC programming
Scheduled
GT13.OPTCONT6 Invited Session
7 November 2023, 11:40
CC1: Auditorio