Support Vector Data Descriptions and k-Means Clustering: One Class?

Nico Görnitz, Luiz Alberto Lima, Klaus-Robert Müller, Marius Kloft, Shinichi Nakajima

Research output: Contribution to journal · Article · peer-review

25 Citations (Scopus)


We present ClusterSVDD, a methodology that unifies support vector data descriptions (SVDDs) and k-means clustering into a single formulation. This allows both methods to benefit from one another: SVDDs gain flexibility by using multiple spheres, while k-means gains robustness to anomalies and flexibility through kernels. In particular, our approach leads to a new interpretation of k-means as a regularized mode-seeking algorithm. The unifying formulation further allows new algorithms to be derived by transferring knowledge from one-class learning settings to clustering settings and vice versa. As a showcase, we derive a clustering method for structured data based on a one-class learning scenario. Additionally, our formulation can be solved via a particularly simple optimization scheme. We evaluate our approach empirically to highlight some of the proposed benefits on artificially generated data as well as on real-world problems, and provide a Python software package comprising various implementations of primal and dual SVDD as well as our proposed ClusterSVDD.
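To illustrate the unified view on toy data: when every point is treated as an inlier, a multi-sphere formulation of this kind reduces to k-means, alternating between assigning each point to its nearest sphere center and refitting the centers. The sketch below is our own simplification under that assumption (mean-based center updates, hard assignments); it is not the paper's implementation, and the function name `cluster_svdd_sketch` is hypothetical:

```python
import numpy as np

def cluster_svdd_sketch(X, k=2, n_iter=20, seed=0):
    """Toy multi-sphere clustering sketch (k-means limit, all points inliers).

    Alternates between assigning points to the nearest sphere center and
    updating each center as the mean of its members; finally reports each
    sphere's radius as the largest member distance.
    """
    rng = np.random.default_rng(seed)
    # initialize centers with k distinct data points (copy, not a view)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # distance of every point to every sphere center, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if sphere is empty
                centers[j] = X[labels == j].mean(axis=0)
    # final assignment and per-sphere radius (max distance of members)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    radii = np.array([
        np.linalg.norm(X[labels == j] - centers[j], axis=1).max()
        if np.any(labels == j) else 0.0
        for j in range(k)
    ])
    return centers, radii, labels
```

With a soft, SVDD-style objective one would instead let a fraction of points fall outside the spheres, which is what gives the full method its anomaly resistance.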

Original language: English
Article number: 8052229
Pages (from-to): 3994-4006
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 9
Publication status: Published - Sept 2018


Keywords

  • Anomaly detection
  • clustering
  • k-means
  • one-class classification
  • support vector data description (SVDD)

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


