On Open Prices, users upload images of receipts to serve as a "proof" of the price of the products they bought.
Currently, after uploading the proof image, users enter the price of each product one at a time, either by scanning the barcode with the app (web or mobile) or by indicating the category of the product (for raw products, such as vegetables or fruits).
An example of such a receipt can be found here: https://prices.openfoodfacts.org/img/0019/B5RGQnlCPI.webp. For more receipts, please look at the Open Prices dataset on Hugging Face.
We wish to automate the task of extracting information from receipts, using a Document AI model such as LayoutLM.
The information we wish to extract is the following (an illustrative target schema is sketched after the list):
- date/hour of purchase
- name of the shop
- address of the shop
- the items bought. For each item, we wish to have the following information:
  - the quantity of items (for quantifiable products)
  - the price per item (for quantifiable products)
  - the total price paid (after discount)
  - the price per kg (or equivalent unit, for products sold by weight)
  - the label (= name) of the product on the receipt
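To make the expected output concrete, here is a minimal sketch of a possible target schema in Python. The class and field names are illustrative assumptions, not an existing Open Prices data model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical target schema for the extraction task; field names are
# illustrative, not part of any existing Open Prices API.

@dataclass
class ReceiptItem:
    label: str                               # name of the product as printed on the receipt
    total_price: float                       # total price paid for this line (after discount)
    quantity: Optional[float] = None         # for quantifiable products
    price_per_item: Optional[float] = None   # for quantifiable products
    price_per_kg: Optional[float] = None     # for products sold by weight (or equivalent unit)

@dataclass
class Receipt:
    shop_name: str
    shop_address: str
    purchase_datetime: datetime              # date/hour of purchase
    items: list[ReceiptItem] = field(default_factory=list)
```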
A reference dataset exists for extraction from receipt images: https://github.com/clovaai/cord.
This dataset, however, mainly contains receipts from Indonesia; we should investigate whether models trained on it also work well with Open Prices data.
Note that we now have OCR data for all images in Open Prices: just replace the file extension in the image URL (e.g. .webp) with .json.gz. To deal with Google Cloud Vision OCR files, look at the openfoodfacts-python library: https://github.com/openfoodfacts/openfoodfacts-python/blob/develop/openfoodfacts/ocr.py#L295.
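As a starting point, here is a minimal sketch of how an OCR file could be fetched and read. It uses the example proof URL above and assumes the standard Google Cloud Vision response layout (`responses[0].fullTextAnnotation.text`); for richer parsing (blocks, words, bounding boxes), the `openfoodfacts.ocr` module linked above is the reference implementation.

```python
import gzip
import json
from urllib.request import urlopen

# Example proof image from Open Prices; the OCR result lives at the same
# URL with the image extension replaced by .json.gz.
image_url = "https://prices.openfoodfacts.org/img/0019/B5RGQnlCPI.webp"
ocr_url = image_url.rsplit(".", 1)[0] + ".json.gz"

with urlopen(ocr_url) as response:
    data = json.loads(gzip.decompress(response.read()))

# Assumption: the file follows the usual Google Cloud Vision layout, with the
# full recognized text under responses[0].fullTextAnnotation.text.
full_text = data["responses"][0]["fullTextAnnotation"]["text"]
print(full_text)
```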
The first task is to investigate whether models trained on the CORD dataset (such as LayoutLMv3) work well on Open Prices receipt images.
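For a first quick test, something along these lines could be run with the Hugging Face transformers library. The checkpoint name below is a placeholder (any LayoutLMv3 model fine-tuned on CORD from the Hub would do), and the default processor runs Tesseract OCR on the image, so it does not reuse the Google Cloud Vision output mentioned above.

```python
import requests
import torch
from PIL import Image
from transformers import LayoutLMv3ForTokenClassification, LayoutLMv3Processor

# Placeholder: any LayoutLMv3 checkpoint fine-tuned on CORD from the Hub.
checkpoint = "some-user/layoutlmv3-finetuned-cord"

# The processor applies Tesseract OCR by default to get words and bounding boxes.
processor = LayoutLMv3Processor.from_pretrained(checkpoint)
model = LayoutLMv3ForTokenClassification.from_pretrained(checkpoint)

# Example Open Prices proof image.
image_url = "https://prices.openfoodfacts.org/img/0019/B5RGQnlCPI.webp"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

encoding = processor(image, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**encoding)

# Map each token to its predicted CORD label (e.g. menu.nm, menu.price, total.total_price).
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
print(labels)
```

A qualitative check on a handful of Open Prices receipts (are item names, prices and totals labelled sensibly?) should be enough to tell whether a CORD-trained model transfers or whether fine-tuning on Open Prices data is needed.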