Home
Welcome to the documentation of the atr-ner-eval library!
It provides an easy way to compute a wide variety of metrics to evaluate automatic workflows for Automatic Text Recognition (ATR) and Named Entity Recognition (NER).
Use case
The main application of this toolbox is the evaluation of ATR-NER models.
From a document image, the following text and entities (marked in bold) are predicted automatically, with mistakes:
You see , it was , opparently , through a mistake on **Guy** 's pert that we missed seeing the flemingoes our **first** morning on the island . " " What harm could possibly save come to **Forrest** through ) Sir **John** 's nonsense ? " **Piens** could barely have spoken with more contempt . " A bully like that respects anyone who ceres to stand up to him .. "
But the actual text and entities are as follows:
You see , it was , apparently , through a mistake on **Guy** 's part that we missed seeing the flamingoes our **first morning** on the island . " " What harm could possibly have come to **Forrest** through Sir **John** 's nonsense ? " **Piers** could hardly have spoken with more contempt . " A bully like that respects anyone who dares to stand up to him . "
This library makes it easy to evaluate the performance of such an automatic workflow with respect to a set of manually annotated examples.
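To make this concrete, below is a minimal, self-contained sketch of two metrics such an evaluation typically relies on: the Character Error Rate (CER) of the transcription, and an exact-match F1 score over entity mentions. Everything in it (the function names, the exact-match criterion) is illustrative and is not the atr-ner-eval API; see the Get started page for the library's actual interface.

```python
# Illustrative sketch only: not the atr-ner-eval API.
# CER: edit distance between prediction and reference, normalised by
# reference length. Entity F1: exact string match between entity mentions.

def levenshtein(ref: str, hyp: str) -> int:
    """Minimum number of character insertions, deletions and substitutions."""
    previous = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        current = [i]
        for j, h in enumerate(hyp, start=1):
            current.append(min(
                previous[j] + 1,             # delete a reference character
                current[j - 1] + 1,          # insert a predicted character
                previous[j - 1] + (r != h),  # substitute (free if equal)
            ))
        previous = current
    return previous[-1]


def cer(ref: str, hyp: str) -> float:
    """Character Error Rate of a predicted transcription."""
    return levenshtein(ref, hyp) / len(ref)


def entity_f1(ref_entities: set[str], hyp_entities: set[str]) -> float:
    """F1 over entity mentions, counting only exact matches as correct."""
    true_positives = len(ref_entities & hyp_entities)
    precision = true_positives / len(hyp_entities) if hyp_entities else 0.0
    recall = true_positives / len(ref_entities) if ref_entities else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# One sentence from the example above:
reference = "Piers could hardly have spoken with more contempt ."
prediction = "Piens could barely have spoken with more contempt ."
print(f"CER: {cer(reference, prediction):.1%}")   # 3 edits / 51 chars = 5.9%
print(f"F1:  {entity_f1({'Piers'}, {'Piens'})}")  # 0.0: "Piens" != "Piers"
```

Run on the full passages above, the same computations would also pick up transcription errors such as opparently/apparently and flemingoes/flamingoes, as well as the mis-recognised entity Piers.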
Get started
Get started with the library here.