The need for fast and robust optimization algorithms is of critical
importance in all areas of machine learning. This paper treats the task of
designing optimization algorithms as an optimal control problem. Using regret
as a metric for an algorithm's performance, we study the existence, uniqueness,
and consistency of regret-optimal algorithms. By deriving first-order
optimality conditions for the control problem, we show that the dynamics of
regret-optimal algorithms must satisfy a specific structure, one equivalent to
performing dual-preconditioned gradient descent on the value function generated
by the regret. Using these optimal dynamics, we provide
bounds on their rates of convergence to solutions of convex optimization
problems. Though closed-form optimal dynamics cannot be obtained in general, we
present fast numerical methods for approximating them, yielding optimization
algorithms that directly optimize their long-term regret. Lastly, we benchmark
these algorithms against commonly used optimization methods to demonstrate
their effectiveness.
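
As a brief illustration of the regret metric referenced above (stated here in a standard form; the paper's precise definition may differ), the cumulative regret of an algorithm producing iterates $x_1, \dots, x_T$ on an objective $f$ can be written as

\[ R_T \;=\; \sum_{t=1}^{T} f(x_t) \;-\; T \min_{x} f(x), \]

so that a regret-optimal algorithm is one whose update dynamics minimize $R_T$ over an admissible class of algorithms.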