
RT 113516 - After slurping the PDF file into memory, a copy is immediately made


Phil (Global Moderator)
Sat Apr 02 08:00:06 2016 futuramedium [...] yandex.ru - Ticket created
Subject:    After slurping the PDF file into memory, a copy is immediately made -- is it necessary?
 
PDF::API2->open() reads the entire file into a scalar and then passes that scalar to 'open_scalar', i.e. a copy is made. Then:
Code: [Select]
D:\>perl -MPDF::API2 -Mstrict -wE "my $f = PDF::API2->open('520_Mb_file.pdf')"
Out of memory!

Out of memory, obviously. I think 'open_scalar' could be left as is, since it has always been there. An 'open_scalar_ref' could be added with minor, obvious modifications, and 'open' rewritten to use 'open_scalar_ref' instead of 'open_scalar'. With that change, the above command completes without error, and everything else seems to work OK.
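The reference-passing idea can be sketched in plain Perl using in-memory filehandles; 'open_scalar' and 'open_scalar_ref' below are stand-ins for the suggested API, not the actual PDF::API2 code:

```perl
use strict;
use warnings;

# Current-style API (assumed): takes the scalar by value, so Perl
# makes a copy of the caller's (possibly huge) slurped data.
sub open_scalar {
    my ($data) = @_;
    open my $fh, '<', \$data or die "in-memory open failed: $!";
    return $fh;
}

# Proposed variant: takes a reference, so no copy is made -- the
# in-memory filehandle reads the caller's original scalar directly.
sub open_scalar_ref {
    my ($data_ref) = @_;
    open my $fh, '<', $data_ref or die "in-memory open failed: $!";
    return $fh;
}

my $pdf_bytes = "%PDF-1.4\n...dummy content...\n%%EOF\n";

my $fh = open_scalar_ref(\$pdf_bytes);   # only the reference is passed
my $first_line = <$fh>;
print $first_line;                       # prints "%PDF-1.4"
close $fh;
```

Perl has supported opening a filehandle on a scalar reference since 5.8, so the only difference between the two subs is whether the caller's scalar gets copied on the way in.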

Further, since PDF::API2::Basic::PDF::File->open() accepts an 'update' parameter (which looks like a contract never to re-use this internal scalar if it is 0, never to 'stringify', etc.), and since it can accept a filename instead of an in-memory filehandle, some kind of flag could be implemented to use a disk file instead of slurping the file and working in memory. Performance would suffer, but at least, for really huge files, we would be able to extract pages, resources, etc.
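As a sanity check on the disk-file idea: an ordinary disk filehandle already supports the seek/read random access a cross-reference-driven PDF parser needs, so nothing in principle forces the whole file into memory. The file contents and offsets below are dummies for illustration:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a small dummy "PDF" to disk for the demo.
my ($tmp, $path) = tempfile(UNLINK => 1);
print {$tmp} "%PDF-1.4\n" . ('x' x 100) . "\nstartxref\n9\n%%EOF\n";
close $tmp;

# Random access without slurping: jump to any offset and read a chunk.
open my $in, '<:raw', $path or die "open failed: $!";
seek $in, 0, 0;
read $in, my $header, 8;
print "$header\n";             # prints "%PDF-1.4"
close $in;
```

Memory use stays constant regardless of file size; only the bytes actually requested are read.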

 PhilterPaper commented on Aug 25

on Fri Aug 18 13:17:16 2017 steve [...] deefs.net - Correspondence added

I've just spent some time looking into this. The core PDF code in PDF::API2::Basic should work fine if it's passed a filehandle -- I don't think there's any code in there that needs a copy or an in-memory version of the PDF.

It should be possible to modify PDF::API2->open() to pass the filehandle to PDF::API2::Basic::PDF::File rather than slurping it into memory and calling open_scalar. The only other subs that may need to be changed are finishobjects, save, saveas, and stringify, which are all in PDF/API2.pm as well. I don't think any other file would need to be touched, so it should be a fairly straightforward patch, and I've just renamed a couple of variables and cleaned up a bit of code to make it easier to implement.

My inclination is that the default/expected behavior should be to read the file as needed, with an option to slurp it into memory for performance reasons, rather than having the current behavior be the default.
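The two strategies can be contrasted in a small sketch; the sub names here are illustrative, not the real PDF::API2 internals:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($tmp, $path) = tempfile(UNLINK => 1);
print {$tmp} '%PDF-1.4' . "\n" . ('x' x 4096) . "\n%%EOF\n";
close $tmp;

# Current behavior (assumed): hold the whole file in a Perl scalar.
sub open_by_slurp {
    my ($file) = @_;
    open my $fh, '<:raw', $file or die "open failed: $!";
    my $content = do { local $/; <$fh> };
    close $fh;
    return \$content;          # ~file-size bytes resident in memory
}

# Proposed default: keep only the filehandle; read pieces on demand.
sub open_by_handle {
    my ($file) = @_;
    open my $fh, '<:raw', $file or die "open failed: $!";
    return $fh;                # memory cost independent of file size
}

my $slurped = open_by_slurp($path);
my $handle  = open_by_handle($path);
printf "slurped %d bytes; handle holds none of the file\n",
    length($$slurped);
close $handle;
```

With a 'slurp' (or similarly named) option, callers who want speed on small files could still opt in to the in-memory path.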

 PhilterPaper commented on Sep 8

on Thu Aug 31 07:22:31 2017 futuramedium [...] yandex.ru - Correspondence added

Thank you for your attention to the problem. I hope I'm finally done worrying about 32-bit OSes, so slurping is now less of an issue -- maybe for others, too. Just a note for your list of priorities :).

Please note, however, that part of what I suggested above (passing a reference instead of a scalar) was a safe and cheap way (no new tests required) to cut the memory footprint of reading any file in half.

<aside>
The memory and time required to read/slurp a file (be it 50% or the original 100%) may be tiny compared to the effort of parsing it and building Perl structures for complex files. E.g., on a brand-new i5-7500 machine:

Quote
perl -MPDF::API2 -E "PDF::API2-> open('PDF Reference 1.7.pdf'); say time - $^T"
366

Then why care about slurping? OK, maybe a new ticket should be opened to address that kind of performance. </aside>

What you are suggesting, on the other hand, may require a bit more planning and effort. E.g., I now think the benefits of not slurping will be void if proper incremental update isn't implemented: the current "update" should be rewritten to append to the original file; it should work across multiple updates while the object persists; and the issue of updating a file that has an XRefStream should be addressed, too.
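For reference, incremental update in PDF terms means appending new objects and a new cross-reference section after the existing %%EOF, never rewriting the original bytes. A minimal sketch of that append discipline (the object bodies and the startxref offset below are dummies):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Create a dummy "original" file.
my ($tmp, $path) = tempfile(UNLINK => 1);
print {$tmp} "%PDF-1.4\n...original objects...\n%%EOF\n";
close $tmp;

# An incremental update appends; it never truncates or rewrites.
open my $out, '>>:raw', $path or die "append open failed: $!";
print {$out} "...updated objects...\nstartxref\n42\n%%EOF\n";
close $out;

# The original bytes are untouched; the file now ends in a second %%EOF.
open my $in, '<:raw', $path or die "open failed: $!";
my $bytes = do { local $/; <$in> };
close $in;
my $eof_count = () = $bytes =~ /%%EOF/g;
print "$eof_count\n";          # prints "2"
```

Because the original bytes never move, this style of update also works when the source is being read from disk rather than from a slurped in-memory copy.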