How to find all image-based PDFs?



I have many PDF documents on my system, and I sometimes notice that a document is image-based, with no editable text. In that case I run OCR on it for better searching in Foxit PhantomPDF, which can OCR multiple files at once. I would like to find all of my PDF documents that are image-based.

I do not understand how a PDF reader can recognize that a document is image-based rather than textual. There must be some fields that these readers access, and those fields should be accessible from the terminal too. This answer in the thread Check if a PDF file is a scanned one gives an open-ended proposal for how to do it:

Your best bet might be to check to see if it has text and also see if it contains a large pagesized image or lots of tiled images which cover the page. If you also check the metadata this should cover most options.

I would like to understand better how to do this effectively. If some metadata field existed, it would be easy, but I have not found such a field. I think the most promising approach is to check whether a page consists of a page-sized image (possibly with an OCR text layer for searching), since that check is effective and is already used in some PDF readers. However, I do not know how to do it.
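As a rough sketch of that proposal (my own illustration, not from the quoted answer), the check below assumes poppler-utils is installed and flags a file when it has no embedded fonts and its first page carries a large raster image; the file name and the 500000-pixel area threshold are hypothetical placeholders:

    # Sketch only: assumes poppler-utils (pdffonts and pdfimages) is installed.
    # A file is flagged when it has no embedded fonts (fewer than 3 pdffonts lines)
    # AND page 1 carries a raster image larger than ~0.5 megapixels (hypothetical threshold).
    file="example.pdf"                                # hypothetical input file
    if [ "$(pdffonts "$file" 2>/dev/null | wc -l)" -lt 3 ] && \
       pdfimages -list -f 1 -l 1 "$file" 2>/dev/null | \
       awk 'NR > 2 && $4 * $5 > 500000 { found = 1 } END { exit !found }'
    then
        echo "$file looks image-based"
    fi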

Edge detection and the Hough transform in relation to the answer

In the Hough transform, specifically chosen parameters span a hypercube in the parameter space. Its complexity is $O(A^{m-2})$, where $m$ is the number of parameters and $A$ is the size of the image space, so you can see that with more than three parameters the problem becomes difficult. Foxit Reader most probably uses 3 parameters in its implementation. Edges are easy to detect reliably, which helps keep the transform efficient, and edge detection must be done before the Hough transform; corrupted pages are simply ignored. The other two parameters are still unknown, but I think they must be nodes and some intersections. How these intersections are computed is unknown, and the formulation of the exact problem is unknown.
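Plugging small parameter counts into the quoted complexity makes the scaling concrete; this is only arithmetic on the formula above, not a claim about Foxit's actual implementation:

$$O(A^{m-2}):\qquad m=3 \;\Rightarrow\; O(A), \qquad m=4 \;\Rightarrow\; O(A^{2}), \qquad m=5 \;\Rightarrow\; O(A^{3}),$$

so each additional shape parameter multiplies the cost by the size of the image space $A$.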

Testing Deajan's answer

The command works in Debian 8.5, but initially I could not manage to get it to work in Ubuntu 16.04:

masi@masi:~$ find ./ -name "*.pdf" -print0 | xargs -0 -I {} bash -c 'export file="{}"; if [ $(pdffonts "$file" 2> /dev/null | wc -l) -lt 3 ]; then echo "$file"; fi'
./Downloads/596P.pdf
./Downloads/20160406115732.pdf
^C

OS: Debian 8.5 64 bit
Linux kernel: 4.6 (from backports)
Hardware: Asus Zenbook UX303UA

Being late to the party, here's a simple solution; it assumes that PDF files which already contain fonts aren't image-based only:

find ./ -name "*.pdf" -print0 | xargs -0 -I {}      \ 
    bash -c 'export file="{}";                          \
    if [ $(pdffonts "$file" 2> /dev/null | \
    wc -l) -lt 3 ]; then echo "$file"; fi'

  • pdffonts lists all embedded fonts in a PDF file. If the PDF contains searchable text, it must also contain fonts, so pdffonts will list them. The check for fewer than three lines of output is because pdffonts' header is 2 lines, so any result below 3 lines means there are no embedded fonts (see the illustration after this bullet). AFAIK, there shouldn't be false positives, although this is more a question to ask the pdffonts developers.
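For illustration, the line-count test works like this (a hypothetical example with placeholder file names, assuming poppler-utils is installed):

    # pdffonts always prints a 2-line header (column names plus a separator line),
    # so a PDF with no embedded fonts yields exactly 2 lines of output.
    pdffonts scanned-only.pdf | wc -l     # prints 2: header only, no fonts => image-based
    pdffonts searchable.pdf   | wc -l     # prints 3 or more: header plus one line per font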

As a one-liner

find ./ -name "*.pdf" -print0 | xargs -0 -I {} bash -c 'export file="{}"; if [ $(pdffonts "$file" 2> /dev/null | wc -l) -lt 3 ]; then echo "$file"; fi'

Explanation: pdffonts file.pdf will show more than 2 lines of output if the PDF contains text. The command prints the filenames of all PDF files that don't contain text.
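An equivalent while-read formulation of the same check (my own sketch, not part of the original answer) loops over the NUL-separated file names instead of spawning bash -c for every file:

    find . -name '*.pdf' -print0 |
    while IFS= read -r -d '' file; do
        # fewer than 3 lines from pdffonts => only the 2 header lines => no embedded fonts
        if [ "$(pdffonts "$file" 2>/dev/null | wc -l)" -lt 3 ]; then
            printf '%s\n' "$file"
        fi
    done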

My OCR project, which has the same feature, is on GitHub: deajan/pmOCR.