Safe Feature Elimination for Non-Negativity Constrained Convex Optimization
Stephen Becker and I have a new paper in the Journal of Optimization Theory and Applications entitled “Safe Feature Elimination for Non-Negativity Constrained Convex Optimization”. We modified the work of El Ghaoui et al. on safe feature elimination (SAFE) for \(\ell_1\)-regularized problems so that it applies to non-negativity constrained convex problems. Our method uses an accurate, but not necessarily optimal, primal-dual feasible pair to bound the dual term in the complementary slackness relation. If that dual term can be bounded away from zero, the corresponding coordinate of any primal optimal point must be zero, and that coordinate can be eliminated from the problem. We assume the primal objective is \(L\)-smooth, so the dual is \(1/L\)-strongly concave, which is what makes such bounds computable.
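To make the screening step concrete, here is a minimal sketch in Python for the NNLS special case. This is my own illustration under stated assumptions, not code from the paper: the dualization via the residual, the function name `nnls_safe_screen`, and the use of `scipy.optimize.nnls` to produce an accurate primal point are all illustrative choices.

```python
# Sketch of a SAFE-style screening test for NNLS:  min_{x >= 0} (1/2)||Ax - b||^2.
# Dualizing through the residual r = Ax - b gives
#     max_{lam} -(1/2)||lam||^2 - b^T lam   subject to  A^T lam >= 0,
# with lam* = A x* - b at optimality.  This dual is 1-strongly concave, so for any
# dual-feasible lam_hat and primal-feasible x_hat with duality gap G,
#     ||lam* - lam_hat|| <= sqrt(2 G).
# Hence if (A^T lam_hat)_i - ||a_i|| * sqrt(2 G) > 0, complementary slackness forces
# x*_i = 0 for every primal optimal x*, and column i can be safely eliminated.
import numpy as np
from scipy.optimize import nnls


def nnls_safe_screen(A, b, x_hat, lam_hat, feas_tol=1e-10):
    """Return a boolean mask of coordinates certified to be zero at any optimum.

    x_hat   : primal-feasible point (x_hat >= 0)
    lam_hat : dual-feasible point (A.T @ lam_hat >= -feas_tol), e.g. the residual
              A @ x_hat - b when x_hat is close to optimal.
    """
    grad = A.T @ lam_hat                      # dual term in complementary slackness
    if grad.min() < -feas_tol:
        raise ValueError("lam_hat is not dual feasible; the screening bound is invalid")

    primal = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2
    dual = -0.5 * np.linalg.norm(lam_hat) ** 2 - b @ lam_hat
    gap = max(primal - dual, 0.0)             # duality gap (nonnegative in exact arithmetic)

    col_norms = np.linalg.norm(A, axis=0)
    # Lower bound on (A^T lam*)_i; strictly positive means x*_i = 0 at every optimum.
    lower = grad - col_norms * np.sqrt(2.0 * gap)
    return lower > 0


# Small demonstration on a random NNLS instance (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
b = rng.standard_normal(100)
x_hat, _ = nnls(A, b)                         # accurate (near-optimal) primal point
lam_hat = A @ x_hat - b                       # residual is dual feasible at the optimum
eliminated = nnls_safe_screen(A, b, x_hat, lam_hat, feas_tol=1e-6)
print(f"{eliminated.sum()} of {A.shape[1]} coordinates certified zero")
```

Because the dual is strongly concave, its maximizer is unique, so a coordinate certified this way is zero in every primal optimal point, which is the sense in which the elimination is "safe".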
Our work was motivated by a large set of extremely ill-conditioned non-negative least-squares (NNLS) problems arising in super-resolution fluorescence microscopy. For instance, reconstructing a single image requires solving 40,000 NNLS problems and combining their outputs into the full, super-resolved image. Using our SAFE methods, we can certify a posteriori that each of these NNLS problems has a unique solution, and therefore that the final image is unique.
Here’s a PDF of our manuscript. The final authenticated version is available on JOTA’s site.