Abstract
The task of structured output prediction deals with learning general functional dependencies between arbitrary input and output spaces. In this context, two loss-sensitive formulations for maximum-margin training have been proposed in the literature, referred to as margin rescaling and slack rescaling, respectively. The latter is believed to be more accurate and easier to handle, yet it is rarely used because no efficient inference algorithms are known for it; margin rescaling, which requires the same type of inference as standard structured prediction, is therefore the most commonly used approach. Focusing on the task of label sequence learning, we define a general framework that handles a large class of inference problems based on Hamming-like loss functions and on the concept of decomposability of the underlying joint feature map. In particular, we present an efficient generic algorithm that covers both rescaling approaches and is guaranteed to find an optimal solution in polynomial time.
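To make the margin-rescaling case concrete: with a Hamming loss, the loss term decomposes over sequence positions, so loss-augmented inference reduces to a standard Viterbi-style dynamic program over the label chain. The sketch below illustrates this in Python; it is not the paper's generic algorithm, and the score arrays (`emissions`, `transitions`) and `gold` labels are illustrative assumptions.

```python
import numpy as np

def loss_augmented_viterbi(emissions, transitions, gold):
    """Find argmax_y [score(x, y) + Hamming(y, gold)] by dynamic programming.

    emissions:   (T, K) array, score of assigning label k at position t
    transitions: (K, K) array, score of moving from label i to label j
    gold:        length-T integer array of true labels
    """
    T, K = emissions.shape
    # Margin rescaling with Hamming loss adds +1 at every position where
    # the candidate label differs from the gold label; since this term
    # decomposes over positions, plain Viterbi recursion still applies.
    aug = emissions + (np.arange(K)[None, :] != gold[:, None])

    delta = np.zeros((T, K))            # best augmented score ending in label k
    back = np.zeros((T, K), dtype=int)  # backpointers
    delta[0] = aug[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + transitions  # (K, K) predecessor scores
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + aug[t]

    # Backtrack the highest-scoring (loss-augmented) label sequence.
    y = np.zeros(T, dtype=int)
    y[-1] = delta[-1].argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y
```

Slack rescaling, by contrast, multiplies the margin by the loss, which couples all positions and breaks this simple decomposition; handling it within one generic polynomial-time procedure is precisely the contribution described in the abstract.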
| Original language | English |
|---|---|
| Article number | 6617696 |
| Pages (from-to) | 870-881 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 25 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - May 2014 |
Keywords
- Dynamic programming
- gene finding
- hidden Markov SVM
- inference
- label sequence learning
- margin rescaling
- slack rescaling
- structural support vector machines (SVMs)
- structured output
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence