Parsing PDFs with PDFMiner in Python: A Code Example

Date: 2021-05-22

Recently, while writing crawlers, I have occasionally run into sites that only publish their content as PDF, so Scrapy cannot grab the page content directly and the only option is to parse the PDF itself. The available solutions essentially come down to pyPDF and PDFMiner. Since PDFMiner is reportedly better suited to text extraction, and text is exactly what I need to parse, I went with PDFMiner (which also means I know nothing about pyPDF).

Let me say up front that parsing PDFs is a painful business. Even PDFMiner does not cope well with badly formatted PDFs, to the point that PDFMiner's own developer grumbles that "PDF is evil." But that is beside the point; the official PDFMiner documentation has the details. Here is the code I ended up with:

import os
import sys
from binascii import b2a_hex

from pdfminer.pdfpage import PDFPage
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import LAParams, LTTextBox, LTTextLine, LTImage, LTFigure

def determine_image_type (stream_first_4_bytes):
    """Find out the image file type based on the magic number comparison of the first 4 (or 2) bytes"""
    file_type = None
    bytes_as_hex = b2a_hex(stream_first_4_bytes).decode('ascii')
    if bytes_as_hex.startswith('ffd8'):
        file_type = '.jpeg'
    elif bytes_as_hex == '89504e47':
        file_type = '.png'
    elif bytes_as_hex == '47494638':
        file_type = '.gif'
    elif bytes_as_hex.startswith('424d'):
        file_type = '.bmp'
    return file_type

def save_image (lt_image, page_number, images_folder):
    """Try to save the image data from this LTImage object, and return the file name, if successful"""
    result = None
    if lt_image.stream:
        file_stream = lt_image.stream.get_rawdata()
        if file_stream:
            file_ext = determine_image_type(file_stream[0:4])
            if file_ext:
                file_name = ''.join([str(page_number), '_', lt_image.name, file_ext])
                if write_file(images_folder, file_name, file_stream, flags='wb'):
                    result = file_name
    return result

###
### Extracting Text
###

def to_bytestring (s, enc='utf-8'):
    """Convert the given unicode string to a bytestring, using the standard encoding, unless it's already a bytestring"""
    if s:
        if isinstance(s, str):
            return s
        else:
            return s.encode(enc)

def update_page_text_hash (h, lt_obj, pct=0.2):
    """Use the bbox x0,x1 values within pct% to produce lists of associated text within the hash"""
    x0 = lt_obj.bbox[0]
    x1 = lt_obj.bbox[2]
    key_found = False
    for k, v in h.items():
        hash_x0 = k[0]
        if x0 >= (hash_x0 * (1.0-pct)) and (hash_x0 * (1.0+pct)) >= x0:
            hash_x1 = k[1]
            if x1 >= (hash_x1 * (1.0-pct)) and (hash_x1 * (1.0+pct)) >= x1:
                # the text inside this LT* object was positioned at the same
                # width as a prior series of text, so it belongs together
                key_found = True
                v.append(to_bytestring(lt_obj.get_text()))
                h[k] = v
    if not key_found:
        # the text, based on width, is a new series,
        # so it gets its own series (entry in the hash)
        h[(x0, x1)] = [to_bytestring(lt_obj.get_text())]
    return h

def parse_lt_objs (lt_objs, page_number, images_folder, text=[]):
    """Iterate through the list of LT* objects and capture the text or image data contained in each"""
    text_content = []
    page_text = {}  # k=(x0, x1) of the bbox, v=list of text strings within that bbox width (physical column)
    for lt_obj in lt_objs:
        if isinstance(lt_obj, LTTextBox) or isinstance(lt_obj, LTTextLine):
            # text, so arrange it logically based on its column width
            page_text = update_page_text_hash(page_text, lt_obj)
        elif isinstance(lt_obj, LTImage):
            # an image, so save it to the designated folder, and note its place in the text
            saved_file = save_image(lt_obj, page_number, images_folder)
            if saved_file:
                # use html style <img /> tag to mark the position of the image within the text
                text_content.append('<img src="' + os.path.join(images_folder, saved_file) + '" />')
            else:
                print('error saving image on page', page_number, repr(lt_obj), file=sys.stderr)
        elif isinstance(lt_obj, LTFigure):
            # LTFigure objects are containers for other LT* objects, so recurse through the children
            text_content.append(parse_lt_objs(lt_obj, page_number, images_folder, text_content))
    for k, v in sorted([(key, value) for (key, value) in page_text.items()]):
        # sort the page_text hash by the keys (x0,x1 values of the bbox),
        # which produces a top-down, left-to-right sequence of related columns
        text_content.append(''.join(v))
    return '\n'.join(text_content)

###
### Processing Pages
###

def _parse_pages (doc, images_folder):
    """With an open PDFDocument object, get the pages and parse each one
    [this is a higher-order function to be passed to with_pdf()]"""
    rsrcmgr = PDFResourceManager()
    laparams = LAParams()
    device = PDFPageAggregator(rsrcmgr, laparams=laparams)
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    text_content = []
    for i, page in enumerate(PDFPage.create_pages(doc)):
        interpreter.process_page(page)
        # receive the LTPage object for this page
        layout = device.get_result()
        # layout is an LTPage object which may contain child objects like LTTextBox, LTFigure, LTImage, etc.
        text_content.append(parse_lt_objs(layout, (i+1), images_folder))
    return text_content

def get_pages (pdf_doc, pdf_pwd='', images_folder='/tmp'):
    """Process each of the pages in this pdf file and return a list of strings representing the text found in each page"""
    return with_pdf(pdf_doc, _parse_pages, pdf_pwd, *tuple([images_folder]))

a = open('a.txt', 'a')
for i in get_pages('/home/jamespei/nova.pdf'):
    a.write(i)
a.close()
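Note that get_pages() and save_image() above rely on two helpers, with_pdf() and write_file(), whose definitions are not included in this excerpt. A minimal sketch of what they need to do, assuming the pdfminer.six PDFParser/PDFDocument API (the exact bodies here are a reconstruction, not canonical code), might look like this:

import os
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument

def with_pdf(pdf_doc, fn, pdf_pwd, *args):
    """Sketch: open the PDF, apply fn(doc, *args) to the parsed document, return the result."""
    result = None
    with open(pdf_doc, 'rb') as fp:
        parser = PDFParser(fp)                        # parser tied to the file object
        doc = PDFDocument(parser, password=pdf_pwd)   # document structure; password goes here
        if doc.is_extractable:                        # respect the PDF's extraction flag
            result = fn(doc, *args)
    return result

def write_file(folder, filename, filedata, flags='w'):
    """Sketch: write filedata to folder/filename ('w' for text, 'wb' for binary); True on success."""
    result = False
    if os.path.isdir(folder):
        with open(os.path.join(folder, filename), flags) as f:
            f.write(filedata)
            result = True
    return result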

The key to this code is the update_page_text_hash function. As you can see, PDFMiner is a coordinate-based parsing framework: every component it can parse carries the coordinates of its top, bottom, left and right edges. For example, x0 = lt_obj.bbox[0] is the coordinate of lt_obj's left edge, and likewise x1 (bbox[2]) is its right edge. The code above groups together all elements whose x0 and x1 coordinates each differ by no more than 20%, which is what makes targeted extraction of content from a PDF possible.
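To make the 20% rule concrete, here is a tiny standalone illustration of the tolerance test used in update_page_text_hash (the bbox numbers are invented for the example):

def same_column(key, x0, x1, pct=0.2):
    """Mimic the tolerance test from update_page_text_hash: both edges of the new
    element must lie within pct of the group's reference edges."""
    kx0, kx1 = key
    return (kx0 * (1.0 - pct) <= x0 <= kx0 * (1.0 + pct)
            and kx1 * (1.0 - pct) <= x1 <= kx1 * (1.0 + pct))

print(same_column((100.0, 300.0), 110.0, 310.0))  # True  -> joins the existing column
print(same_column((100.0, 300.0), 150.0, 500.0))  # False -> starts a new group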

----------------Addendum--------------------

One thing to watch out for: when parsing some PDFs you may get an exception like this: pdfminer.pdfdocument.PDFEncryptionError: Unknown algorithm: param={'CF': {'StdCF': {'Length': 16, 'CFM': /AESV2, 'AuthEvent': /DocOpen}}, 'O': '\xe4\xe74\xb86/\xa8)\xa6x\xe6\xa3/U\xdf\x0fWR\x9cPh\xac\xae\x88B\x06_\xb0\x93@\x9f\x8d', 'Filter': /Standard, 'P': -1340, 'Length': 128, 'R': 4, 'U': '|UTX#f\xc9V\x18\x87z\x10\xcb\xf5{\xa7\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 'V': 4, 'StmF': /StdCF, 'StrF': /StdCF}

On its face this says the PDF is encrypted and therefore cannot be parsed, yet opening the file directly works fine and never asks for a password. The reason is that the PDF really is encrypted, but with an empty password, which is what triggers this error.

The way around this is to decrypt the file with the qpdf command-line tool (make sure qpdf is installed). To invoke it from Python, subprocess.call is all you need:

from subprocess import call

call('qpdf --password=%s --decrypt %s %s' % ('', file_path, new_file_path), shell=True)

Here file_path is the path of the PDF to decrypt and new_file_path is the path of the decrypted output; run the parsing on the decrypted file and you are done.
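Putting the two steps together, a small wrapper might look like the sketch below; it assumes the get_pages() function defined earlier in this post, and the paths in the usage example are placeholders:

from subprocess import call

def decrypt_and_parse(file_path, new_file_path, password=''):
    """Decrypt file_path into new_file_path with qpdf (empty password by default),
    then parse the decrypted copy with the get_pages() helper defined above."""
    call('qpdf --password=%s --decrypt %s %s' % (password, file_path, new_file_path), shell=True)
    return get_pages(new_file_path)

# Example usage (placeholder paths):
# pages = decrypt_and_parse('/home/jamespei/nova.pdf', '/tmp/nova_decrypted.pdf')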

That's all for this article. I hope it helps with your learning, and thank you for your continued support.
