Abstract: Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows designing efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with regression and inverse problems, we develop an iterative regularization approach based on the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms, for which we prove convergence as well as rates of convergence. Our approach compares favorably with other alternatives, as confirmed in numerical simulations.
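The abstract does not reproduce the paper's diagonal algorithm, so the following is only a minimal sketch of the general idea it builds on: early-stopped subgradient descent on the hinge loss, where the number of iterations plays the role of the regularization parameter. All function names, the step-size schedule, and the validation-based stopping rule here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def hinge_subgradient(w, X, y):
    # Subgradient of the average hinge loss (1/n) * sum_i max(0, 1 - y_i <w, x_i>)
    margins = y * (X @ w)
    active = margins < 1.0  # samples violating the margin contribute to the subgradient
    return -(X[active].T @ y[active]) / len(y)

def early_stopped_subgradient(X, y, X_val, y_val, step=0.1, max_iter=1000):
    # Iterative regularization: the iteration count acts as the regularization
    # parameter; we keep the iterate with the best validation error (early stopping).
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), np.inf
    for t in range(1, max_iter + 1):
        # Decreasing step size, standard for subgradient methods
        w -= (step / np.sqrt(t)) * hinge_subgradient(w, X, y)
        err = np.mean(np.sign(X_val @ w) != y_val)
        if err < best_err:
            best_err, best_w = err, w.copy()
    return best_w
```

In this toy version, running the iteration longer moves from strongly regularized iterates toward interpolation, so stopping early trades numerical accuracy against statistical accuracy, which is the trade-off the abstract refers to.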