
Python unzipping stream of bytes?

Here is the situation:

  • I get gzipped xml documents from Amazon S3

    import boto
    from boto.s3.connection import S3Connection
    from boto.s3.key import Key
    conn = S3Connection('access Id', 'secret access key')
    b = conn.get_bucket('mydev.myorg')
    k = Key(b)
    k.key = 'path/to/document.xml.gz'  # placeholder: the object's key name
  • I download each one to a temp file and read it back decompressed:

    import gzip
    f = open('/tmp/p', 'wb')
    k.get_file(f)  # download the gzipped object into the temp file
    f.close()
    r = gzip.open('/tmp/p', 'rb')
    file_content = r.read()


How can I unzip the streams directly and read the contents?

I do not want to create temp files; they feel like an ugly workaround.

Yes, you can use the zlib module to decompress byte streams:

import zlib

def stream_gzip_decompress(stream):
    dec = zlib.decompressobj(32 + zlib.MAX_WBITS)  # offset 32 to skip the header
    for chunk in stream:
        rv = dec.decompress(chunk)
        if rv:
            yield rv
    # emit whatever remains in the decompressor's internal buffer
    tail = dec.flush()
    if tail:
        yield tail

The offset of 32 signals to zlib that a gzip header is expected and should be skipped.
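As an aside (a quick sketch, not part of the original answer), the same wbits value also works with the one-shot zlib.decompress() and auto-detects either wrapper:

import gzip
import zlib

payload = b'hello world'

# wbits = 32 + zlib.MAX_WBITS accepts both gzip- and zlib-wrapped data
assert zlib.decompress(gzip.compress(payload), 32 + zlib.MAX_WBITS) == payload
assert zlib.decompress(zlib.compress(payload), 32 + zlib.MAX_WBITS) == payload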

The S3 key object is an iterator, so you can do:

for data in stream_gzip_decompress(k):
    # do something with the decompressed data
    • Thank you for the reply, @MartijnPieters! Strangely, that doesn't seem to have solved the problem. (Apologies for the following 1 liner) dec = zlib.decompressobj(32 + zlib.MAX_WBITS); for chunk in app.s3_client.get_object(Bucket=bucket, Key=key)["Body"].iter_chunks(2 ** 19): data = dec.decompress(chunk); print(len(data)); Seems to output 65505 then 0, 0, 0, 0, 0, .... could this be something to do with iter_chunks()?
    • @WillJones: please post a separate question for that, this is not something we can hash out in comments. Sorry!
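
For reference, a minimal sketch of wiring the generator above into the boto3 call from the comment (the bucket and key names are placeholders; stream_gzip_decompress is the function defined in the answer). Note that zlib's decompressobj buffers internally, so some chunks can legitimately yield no output until more input arrives:

import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket='my-bucket', Key='my-file.xml.gz')['Body']

# stream_gzip_decompress only yields non-empty output chunks
for data in stream_gzip_decompress(body.iter_chunks(2 ** 19)):
    print(len(data))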

I had to do the same thing and this is how I did it:

import gzip
import StringIO

f = StringIO.StringIO()
k.get_file(f)  # k is the boto Key from the question; download into the buffer
f.seek(0)  # This is crucial
gzf = gzip.GzipFile(fileobj=f)
file_content = gzf.read()
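
On Python 3, StringIO.StringIO no longer exists; a minimal equivalent sketch uses io.BytesIO (k is again assumed to be the boto Key from the question):

import gzip
import io

f = io.BytesIO()
k.get_file(f)  # download the gzipped object into the in-memory buffer
f.seek(0)      # rewind before handing the buffer to GzipFile
file_content = gzip.GzipFile(fileobj=f).read()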

For Python 3.x and boto3:

I used BytesIO to read the compressed file into a buffer object, then used zipfile to open the buffer as an archive, and I was able to read the data line by line.

import io
import zipfile
import boto3

s3 = boto3.resource('s3', 'us-east-1')

def stream_zip_file():
    # the bucket and key names below are placeholders
    obj = s3.Object(bucket_name='my-bucket', key='my-file.zip')
    buffer = io.BytesIO(obj.get()["Body"].read())
    z = zipfile.ZipFile(buffer)
    foo2 = z.open(z.infolist()[0])  # open the first member of the archive
    line_counter = 0
    for _ in foo2:
        line_counter += 1
    print(line_counter)

if __name__ == '__main__':
    stream_zip_file()
    • I noticed that memory consumption increases significantly when we do buffer = io.BytesIO(obj.get()["Body"].read()). However, reading the body a portion at a time with read(1024) keeps the memory usage low!
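
A minimal sketch of that chunked variant (the 1 MiB chunk size is an arbitrary choice, and obj is the s3.Object from the answer above). The whole archive still ends up in the buffer, since ZipFile needs a seekable file, but chunking avoids holding a second full copy of the data inside a single read() call:

import io
import zipfile

def buffer_zip_in_chunks(obj, chunk_size=1024 * 1024):
    # obj is a boto3 s3.Object; read its body a chunk at a time
    body = obj.get()["Body"]
    buffer = io.BytesIO()
    for chunk in iter(lambda: body.read(chunk_size), b""):
        buffer.write(chunk)
    buffer.seek(0)
    return zipfile.ZipFile(buffer)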

You can try PIPE and read the contents without downloading the file:

    import subprocess
    c = subprocess.Popen('zcat -c <gzip file name>', shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for row in c.stdout:
        print(row)

In addition, "/dev/fd/" + str(c.stdout.fileno()) will give you the FIFO file name (a named pipe), which can be passed to another program.
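
A brief sketch of that named-pipe idea (the file name and the consuming command, wc -l, are placeholders): the child process can open the decompressed stream via /dev/fd/ as long as the descriptor stays open:

    import subprocess

    zcat = subprocess.Popen(['zcat', '-c', 'file.xml.gz'], stdout=subprocess.PIPE)
    fd = zcat.stdout.fileno()
    # pass_fds keeps the pipe's read end open (and numbered) in the child
    subprocess.run(['wc', '-l', '/dev/fd/%d' % fd], pass_fds=(fd,))
    zcat.stdout.close()
    zcat.wait()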
