First International Workshop on Automated Forensic Handwriting Analysis (AFHA 2011)
17-18 September 2011, Beijing, China

Signature Verification Competition for On- and Offline Skilled Forgeries (SigComp2011)

Following the successful Signature Verification Competition on Skilled Forgeries (SigComp2009) organized at ICDAR 2009, we invite all researchers and developers in the field of signature verification to register and participate in a new competition (SigComp2011). Besides the skilled forgery detection on Western signatures, this competition will introduce a novel set of Chinese signatures. The plan is to evaluate the validity of the systems on both Western and Chinese signatures.

---- News ----

The final deadline for submitting systems is 15 May 2011. Please send your executables, a description of your approach, and a readme for your program in a single ZIP file to Marcus Liwicki by email.
There have been questions about how to compute the log-likelihood ratios and how the systems will be evaluated. We will build our evaluation on the FoCal framework, available from Niko Brummer's site, and calculate the Cllr score. Note that even if your system achieves a perfect score on the training set, the LR would not be infinity, since the between-source variability of the reference population (which would be all people in the world trying to forge a signature) is unknown. As Figure 5 in Gonzalez et al.'s paper illustrates, typical systems compute just the evidence E, not the likelihood ratio LR. We therefore suggest applying some Gaussian smoothing to the LR curves. Typically useful LR values will range from 10^-10 to 10^10.
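For participants unfamiliar with the Cllr score mentioned above, the following is a minimal sketch of how it is typically defined (the official evaluation will use the FoCal framework; this standalone version is only for sanity-checking your own LR output):

```python
import math

def cllr(target_lrs, nontarget_lrs):
    """Cost of log-likelihood ratio (Cllr), as popularised by the FoCal toolkit.

    target_lrs:    LRs from same-writer comparisons (H1 is true)
    nontarget_lrs: LRs from different-writer comparisons (H2 is true)

    A non-informative system that always outputs LR = 1 scores Cllr = 1;
    lower values are better, 0 would be a perfectly discriminating and
    perfectly calibrated system.
    """
    c_target = sum(math.log2(1.0 + 1.0 / lr) for lr in target_lrs) / len(target_lrs)
    c_nontarget = sum(math.log2(1.0 + lr) for lr in nontarget_lrs) / len(nontarget_lrs)
    return 0.5 * (c_target + c_nontarget)
```

Note how an overconfident wrong answer (a tiny LR for a genuine signature, or a huge LR for a forgery) is penalised heavily, which is exactly why clipping LRs to the 10^-10 to 10^10 range is advisable.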

The data set is available for download here.
Before downloading, please read the disclaimer, which also contains the password.
Please also do not forget to send a registration email to Marcus Liwicki with your name, affiliation, and contact information.

Information on how to access the data from the ICDAR 2009 competition is available here.


The Netherlands Forensic Institute and the Institute for Forensic Science in Shanghai are in search of a signature verification system that can be implemented in forensic casework and research to objectify results. We want to bridge the gap between recent technology developments and forensic casework. In collaboration with the German Research Center for Artificial Intelligence we have organized a new competition in which we ask you to compare questioned signatures against a set of reference signatures.


The objective of this competition is to allow researchers and practitioners from academia and industry to compare their performance in signature verification on two new, unpublished, forensic-like datasets (Dutch and Chinese signatures). Skilled forgeries and genuine signatures were collected while writing on paper attached to a digitizing tablet. The collected signature data are available in both an online and an offline format. Participants may compete on the online data only, on the offline data only, or on both formats combined.

As in the last ICDAR competition, our aim is to compare different signature verification algorithms systematically for the forensic community, with the objective of establishing a benchmark on the performance of such methods (providing new, unpublished, forensic-like datasets with authentic and skilled-forgery signatures in both online and offline format).


The Forensic Handwriting Examiner (FHE) weighs the observations in light of two hypotheses H1 vs. H2:

  • Hypothesis 1: The questioned signature is an authentic signature of the reference writer;
  • Hypothesis 2: The questioned signature was written by a writer other than the reference writer.

The interpretation of the observed similarities and differences in signature analysis is not as straightforward as in other forensic disciplines such as DNA or fingerprint evidence, because signatures are the product of a behavioral process that can be manipulated, either by the reference writer himself or by another person. In this competition, only those cases of H2 occur in which the forger is not the reference writer. This continues the last successful competition, held at ICDAR 2009.

Description of the Data Sets

In the test phase, signatures can either be genuine (written by the reference writer) or simulated (a simulation of the signature by a writer other than the reference writer). The collection contains offline and online signature samples. The offline datasets consist of PNG images, scanned at 400 dpi in RGB color. The online datasets consist of ASCII files with one sample per line in the format X, Y, Z. Sampling rate: 200 Hz; resolution: 2000 lines/cm; precision: 0.25 mm. Collection device: WACOM Intuos3 A3 Wide USB Pen Tablet. Collection software: MovAlyzer. A preprinted sheet with 12 numbered boxes (width 59 mm, height 23 mm) was placed underneath the blank writing paper, and four extra blank pages were added underneath the first two pages to ensure a soft writing surface.
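A minimal reader for the online ASCII format described above might look as follows. The exact field separator is not specified here, so this sketch accepts both whitespace- and comma-separated values; the interpretation of Z as axial pen pressure is an assumption:

```python
def read_online_signature(path):
    """Read one online signature file: one sample per line with three
    numeric fields X, Y, Z (pen position plus, presumably, axial
    pressure), recorded at 200 Hz. Lines that do not contain exactly
    three fields (e.g. headers or blanks) are skipped.
    """
    samples = []
    with open(path) as f:
        for line in f:
            fields = line.replace(',', ' ').split()
            if len(fields) != 3:
                continue
            x, y, z = (float(v) for v in fields)
            samples.append((x, y, z))
    return samples
```

With a 200 Hz sampling rate, the timestamp of sample i can be reconstructed as i / 200.0 seconds if your features need it.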

Dutch dataset

Total set: 1790 signatures.
Training set: data of 10 reference writers and some skilled forgeries of their signatures.
Additionally, the public data of the 2009 competition may be used (contact Elisa van den Heuvel).
Test set: #.

Chinese dataset

Total set: 960 signatures.
Training set: data of 10 reference writers and some skilled forgeries of their signatures.
Test set: #.


The system will receive as input the mode (online or offline), the questioned signature, and up to 12 authentic signatures from the reference writer. In this competition, we ask you to produce a comparison score (e.g. a degree of similarity or difference) and the evidential value of that score, expressed as the ratio of the probabilities of finding that score when Hypothesis 1 is true and when Hypothesis 2 is true (i.e. the likelihood ratio).
For evaluation we will apply analysis methods used by FHEs to assess the value of the evidence; that is, likelihood ratios will carry more weight than in previous competitions.
Note that this competition introduces a slight paradigm shift (from the 'decision paradigm' to evidential value) that affects the task. The issue is not pure classification, since (a) you cannot know the probability of authorship based on handwriting comparison alone, (b) classification brings with it the probability of an error whose cost is undefined, and (c) you were never asked to decide on authorship. The true issue is to find the likelihood ratio (LR) for a comparison, given the hypotheses that the questioned handwriting has the reference author or a different author. The relevant graphs would therefore show histograms of some measure of similarity (or difference, or any continuous measure that used to be compared against a threshold when this was a classification task) for intra-source and inter-source comparisons (corresponding to the hypotheses of same author vs. different author). Such graphs make it possible to assess the value of the evidence (and the discrimination), which is what really matters.
Therefore, in this competition we ask the participants to take a closer look at the likelihood ratio.
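One simple way to turn a raw comparison score into an LR, in the spirit of the intra-source/inter-source histograms described above, is to model the score distributions under H1 and H2 and take the ratio of their densities at the observed score. The sketch below fits a single Gaussian to each set of training scores (one possible reading of the suggested Gaussian smoothing; the organisers do not prescribe a method) and clips the result to the 10^-10 to 10^10 range mentioned in the news item:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(score, same_scores, diff_scores,
                     lr_floor=1e-10, lr_ceil=1e10):
    """Convert a comparison score into a likelihood ratio.

    same_scores: training scores from intra-source (same-writer) comparisons
    diff_scores: training scores from inter-source (different-writer) comparisons

    Illustrative only: each distribution is modelled with a single
    Gaussian fitted to the training scores. The clipping range mirrors
    the organisers' note that useful LRs lie between 10^-10 and 10^10.
    """
    def fit(scores):
        m = sum(scores) / len(scores)
        var = sum((s - m) ** 2 for s in scores) / len(scores)
        return m, max(math.sqrt(var), 1e-9)  # guard against zero spread

    m1, s1 = fit(same_scores)
    m2, s2 = fit(diff_scores)
    lr = gaussian_pdf(score, m1, s1) / max(gaussian_pdf(score, m2, s2), 1e-300)
    return min(max(lr, lr_floor), lr_ceil)
```

A score that falls in the heart of the intra-source distribution yields an LR well above 1 (supporting H1); one in the inter-source region yields an LR below 1 (supporting H2). In practice, a kernel density estimate or a dedicated calibration method would replace the single-Gaussian fit.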

Important dates

Trial test data provision: 20 February 2011 (contact Marcus Liwicki)
Competition registration deadline: 15 May 2011
Submission deadline of the verification tools: 15 May 2011
Results computation: May 2011 (announcement at the end of May)
Summary of the competition: 15 June 2011