
Comments (81)

GoogleCodeExporter commented on July 19, 2024
You can use snappy_unittest for this; it's crude, but it works.

What's the intended use case? For disk-to-disk compression, usually you have 
CPU time for something like gzip -1 instead.

Original comment by [email protected] on 18 Apr 2011 at 7:12

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
It would just be useful in the same way that lzop is useful, as a general 
pipeline tool.  E.g. disk-to-different-disk, process-to-ssh, etc.

Original comment by [email protected] on 22 Apr 2011 at 2:40

GoogleCodeExporter commented on July 19, 2024

Original comment by [email protected] on 26 Apr 2011 at 12:56

  • Added labels: Type-Enhancement
  • Removed labels: Type-Defect

GoogleCodeExporter commented on July 19, 2024
Attached patch:

1) Adds streaming support, at least for streams created with the current 
compressor
2) Creates command line tools =snzip= and =snunzip=.  Both work solely with 
standard input and output, making them most useful for pipes.

The resulting tool passes basic sanity checks (compress/decompress) and seems to 
have acceptable performance. It has the limitation that for files larger than 
64K, the reported file size will differ from the actual file size (since the 
header must be output before the entire stream is received).

My C++ is rusty to nonexistent, so style/culture fixes are welcome.

Original comment by [email protected] on 17 Jun 2011 at 9:24

Attachments:

GoogleCodeExporter commented on July 19, 2024
I made another patch, snzip.dif, which builds snzip.
It has options similar to those of gzip and bzip2, as follows.

To compress file.tar:
 snzip file.tar

  The compressed file name is 'file.tar.snz' and the original file is deleted.
  Timestamp, mode and permissions are preserved as far as possible.

To compress file.tar and write to standard output:
 snzip -c file.tar > file.tar.snz
or
 cat file.tar | snzip > file.tar.snz

To uncompress file.tar.snz:

 snzip -d file.tar.snz
or
 snunzip file.tar.snz

  The uncompressed file name is 'file.tar' and the original file is deleted.
  Timestamp, mode and permissions are preserved as far as possible.

  If the program name includes 'un', as in snunzip, it acts as if '-d' were set.

To uncompress file.tar.snz and write to standard output:

 snzip -dc file.tar.snz > file.tar
 snunzip -c file.tar.snz > file.tar
 snzcat file.tar.snz > file.tar
 cat file.tar.snz | snzcat > file.tar

  If the program name includes 'cat', as in snzcat, it acts as if '-dc' were set.

It has been tested on Linux and should work on other Unix-like OSs.
As for Windows, it needs a getopt(3)-compatible function, which is 
available in many places as a public-domain implementation.

Original comment by [email protected] on 31 Jul 2011 at 12:12

Attachments:

GoogleCodeExporter commented on July 19, 2024
Sorry, I failed to attach the correct file.
I've attached a new one.

Original comment by [email protected] on 31 Jul 2011 at 12:16

Attachments:

GoogleCodeExporter commented on July 19, 2024
kubo, your patch seems to work well; I did have to make one change for a missing 
'PACKAGE_STRING', and it was not compiled correctly by default when I ran 
'make snzip', but the utility is exactly what I was looking for. I've also 
added a -v to print out the version 1.0.3.

Original comment by jehiah on 12 Aug 2011 at 7:52

GoogleCodeExporter commented on July 19, 2024
I made a new patch to support mingw32 and cygwin.

> kubo, your patch seems to work well; I did have to make one change for a missing
> 'PACKAGE_STRING', and it was not compiled correctly by default when I ran
> 'make snzip', but the utility is exactly what I was looking for. I've also
> added a -v to print out the version 1.0.3.

The missing macro 'PACKAGE_STRING' is defined in config.h by autoconf.
What version of autoconf do you use? I'm using autoconf 2.65.

I also prefer the '-v' option for printing the version, but gzip and bzip2
use it as a verbose output option, so I didn't add it.

Original comment by [email protected] on 21 Aug 2011 at 9:37

Attachments:

GoogleCodeExporter commented on July 19, 2024
>> I did have to make one change for a missing 'PACKAGE_STRING' and it was not
>> compiled correctly by default when I ran 'make snzip'

Could you provide your changes?

Original comment by [email protected] on 22 Sep 2011 at 2:31

GoogleCodeExporter commented on July 19, 2024
Guys, I'm a little confused here. Shouldn't the download of snappy.h allow you 
to simply run this command:

snappy::Compress('/tmp/testfileinput', '/tmp/testfileoutput');

from within your C++ code? Just two simple string inputs?

Original comment by [email protected] on 24 Sep 2011 at 5:00

GoogleCodeExporter commented on July 19, 2024
@mina.moussa : 

snappy::Compress reads the full input, performs compression, writes the full 
output.

Imagine you have 5 TB of data to compress... what do you do? Well, you can buy 
lots of RAM and hard disks to swap to while the compression happens.

Or, better yet, you can write a loop that reads in chunks of the file, runs them 
through snappy::Compress, and writes each chunk to an output file with a 
container format that can later be decompressed by reading in discrete chunks 
and decompressing them.

Though I haven't played with these command line tools, if they behave properly 
they should allow you to avoid having to come up with a container file format, 
and avoid writing loops that work on small chunks of the input at a time, by 
enabling streaming of input to the tool, which would stream 
compressed/decompressed output.

Original comment by [email protected] on 26 Sep 2011 at 2:49

GoogleCodeExporter commented on July 19, 2024
Yes, this is probably what you'd want for a command-line tool supporting pipes: 
A simple framing format. For each block, probably the compressed length (the 
uncompressed length is already in the format), perhaps some flags (EOF?), and 
the CRC32c of the uncompressed data in that block.

Original comment by [email protected] on 26 Sep 2011 at 2:54

GoogleCodeExporter commented on July 19, 2024
We have a simple framing format for streaming in the Java port of Snappy:

https://github.com/dain/snappy/blob/master/src/main/java/org/iq80/snappy/SnappyOutputStream.java

Each 32k block is preceded by a 3-byte header, which is a 1-byte flag 
indicating if the block is compressed or not, and a 2-byte length of the block.
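The 3-byte block header described above can be sketched as follows. This is an illustration, not the Java port's actual code; the big-endian length is an assumption here, though later comments in the thread confirm big-endian was chosen.

```python
import struct

def encode_block_header(compressed: bool, length: int) -> bytes:
    # 1-byte flag (0x01 = compressed, 0x00 = stored) followed by a 2-byte
    # big-endian block length (assumed byte order; see later comments).
    assert 0 <= length <= 32768  # the port's 32k block size
    return struct.pack(">BH", 0x01 if compressed else 0x00, length)

def decode_block_header(header: bytes):
    # Inverse of the above: returns (compressed?, block length).
    flag, length = struct.unpack(">BH", header)
    return bool(flag), length
```

For example, a compressed 32k block would carry the header b"\x01\x80\x00".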

Our main requirements were speed and the ability to concatenate compressed 
files.  The gzip format allows concatenation, but the common Java libraries 
don't support this.  We avoided writing a checksum for simplicity and speed.  
The format doesn't currently have a header (magic number), but using a whole 
byte for the compressed flag allows adding one later.

It would be nice to have a standard streaming format and tools.  We're going to 
try to get the Hadoop project to use this format too (which is our primary use 
case).

Original comment by electrum on 26 Sep 2011 at 5:24

GoogleCodeExporter commented on July 19, 2024
The ability to concatenate is an interesting feature. Something that would 
combine this with the ability to detect file format would be the best, though, 
so you won't need yet another container format for that.

Not doing checksumming sounds a bit suboptimal; you can do it really cheaply on 
modern CPUs (gigabytes per second per core), especially since the data is 
already going to be in the L1 cache. And with multiple implementations 
starting to float around (Java vs. C++ vs. Go), it's easy for something 
subtle to go wrong.

Original comment by [email protected] on 28 Sep 2011 at 10:28

GoogleCodeExporter commented on July 19, 2024
Steinar, you have a good point about checksums.

We updated the stream format to contain the masked CRC32C of the input data, 
providing protection against corruption or a buggy implementation.  We also 
added a file header "snappy\0", which happens to be the same size (7 bytes) as 
the block header.  The file header may precede any block header one or more 
times, thus supporting concatenation, including "empty" files (that contain only 
the file header).

See the SnappyOutputStream link above for the formal description.  Does this 
format sound reasonable to standardize?

Original comment by electrum on 30 Sep 2011 at 8:32

GoogleCodeExporter commented on July 19, 2024
OK, this starts to sound pretty good to me -- I should probably get somebody 
else in here to look at it as well, but it starts to become reasonable.

Some questions (mostly nits):

 - What do you need the uncompressed/compressed flag for? In what situations would you want to store the data uncompressed?
 - Is the length 16-bit signed or unsigned? Why is it 32768 and not 32767 or 65535?
 - Should the lengths really be stored big-endian, when all other numbers in Snappy are stored little-endian?
 - Can you verify that the CRC32c polynomial you're using is compatible with what the SSE4 CRC32 instruction computes? It sounds reasonable that if we're defining a new format, an implementation in native code should be able to make use of that instruction.

Thanks!

Original comment by [email protected] on 3 Oct 2011 at 9:50

GoogleCodeExporter commented on July 19, 2024
Some drive-by comments:

For the uncompressed/compressed flag, leveldb's tables uses snappy, but if the 
compression doesn't save more than 12.5% of the bytes, then the block is left 
uncompressed on disk:
http://code.google.com/p/leveldb/source/browse/table/table_builder.cc#147

For checksums, it looks like github.com/dain is using the same CRC32c-based 
checksum as leveldb:
https://github.com/dain/snappy/blob/master/src/main/java/org/iq80/snappy/Crc32C.java
http://code.google.com/p/leveldb/source/browse/util/crc32c.h#28

Original comment by [email protected] on 3 Oct 2011 at 10:32

GoogleCodeExporter commented on July 19, 2024
Here is a concrete proposal. It is possibly too complicated, but it does let a 
.snappy file start with a 7-byte magic header, and also allows concatenating 
multiple .snappy files together.

The byte stream is a series of frames, and each frame has a header and a body. 
The header is always 7 bytes. The body has variable length, in the range [0, 
65535].

The first header byte is flags:
  - bit 0 is comment,
  - bit 1 is compressed,
  - bit 2 is meta,
  - bits 3-7 are unused.

The comment bit means that the rest of the header is ignored (including any 
other flag bits), and the body has zero length. Thus, "sNaPpY\x00" is a valid 
comment header, since 's' is 0x73.

For non-comment headers, the remaining 6 bytes form a uint16 followed by a 
uint32, both little-endian. The uint16 is the body length. The uint32 is a 
CRC32c checksum, the same as used by leveldb. This differs from the Java code 
linked to above in that it's little-endian (like the rest of Snappy), and the 
maximum body length is 65535, not 32768.

The compressed bit means that the body is Snappy-compressed, and that the body 
length and checksum refer to the compressed bytes. If the bit is off, the body 
is uncompressed, and the body length and checksum refer to the uncompressed 
bytes. Each frame's compression is independent of any other frame.

The meta bit means that the body is metadata, and not part of the data stream. 
This is a file format extension mechanism, but there are no recognized 
extensions at this time.

A conforming decoder can simply skip every frame with the comment or meta bits 
set.

Original comment by [email protected] on 4 Oct 2011 at 11:03

GoogleCodeExporter commented on July 19, 2024
I've written a Go implementation of that proposal at 
http://codereview.appspot.com/5167058. It could probably do with a few more 
comments, but as it is, it's about 250 lines of code.

I added an additional restriction that both the compressed and uncompressed 
lengths of a frame body have to be < 65536, not just the compressed length. 
This restriction means that I can allocate all my buffers up front. Thus, once 
I've started decoding, I don't need to do any extra mallocs regardless of how 
long the stream is, or whether the uncompressed stream data looks like 
"AAAAAAAA...".

Original comment by [email protected] on 4 Oct 2011 at 1:15

GoogleCodeExporter commented on July 19, 2024
Answers to Steinar's questions:

Why the uncompressed/compressed flag?  As mentioned above by Nigel, for the 
same reason that leveldb does it.  Because Snappy's goal is speed, and doesn't 
compress well compared to slower algorithms like zlib, it makes sense to 
sacrifice a little more space for speed.  (We chose the same cutoff as leveldb, 
12.5%, but the cutoff is independent of the format.)

The 16-bit length is unsigned.  Why 32768 and not 65535?  Two reasons.  First, 
it matches Snappy's internal block size.  Because Snappy will split larger 
blocks, the only potential gain is fewer chunk headers.  Second, it is a power 
of two.  If you use 65535 and compress 64k (65536) bytes of data, then you end 
up with two chunks, with the second chunk being only 1 byte.
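The power-of-two argument can be seen with a toy chunking function (illustrative only, not from any of the implementations discussed):

```python
def split_into_chunks(total: int, max_chunk: int) -> list:
    # How a stream of `total` bytes divides under a given chunk-size limit.
    chunks = []
    while total > 0:
        chunks.append(min(total, max_chunk))
        total -= chunks[-1]
    return chunks
```

With a 65535-byte limit, 64k (65536 bytes) of input yields chunks of 65535 and 1 byte; with a 32768-byte limit it yields two even 32768-byte chunks.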

Should the length be big endian or little endian?  We chose big endian because 
that's common for file formats and network protocols.  Given that Snappy uses 
little endian, I have no objections to changing it.

The CRC32C was chosen specifically to be compatible with the SSE4 instruction.  
It's a bug if it's not.  The Java implementation uses the CRC32C code from 
Hadoop, which we haven't verified extensively, but it matched in cursory checks 
against the Python leveldb reader.
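For reference, CRC-32C (the Castagnoli polynomial, which is what the SSE4.2 CRC32 instruction computes) can be written bit-by-bit in a few lines. The masked variant below follows leveldb's masking scheme (rotate right 15 bits, add a constant); that scheme is an assumption drawn from leveldb's crc32c header rather than from this thread:

```python
def crc32c(data: bytes) -> int:
    # Bit-at-a-time CRC-32C using the reflected Castagnoli polynomial
    # 0x82F63B78 -- slow, but dependency-free and easy to audit.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def masked_crc32c(data: bytes) -> int:
    # leveldb-style masking: rotate right by 15 bits, then add a constant,
    # so a CRC stored next to the data it covers doesn't CRC to itself.
    crc = crc32c(data)
    rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF
```

The standard check value crc32c(b"123456789") == 0xE3069283 is a quick way to confirm an implementation matches the hardware instruction.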

Original comment by electrum on 5 Oct 2011 at 6:03

GoogleCodeExporter commented on July 19, 2024
Nigel, I'm curious why you have selected a bit-flag encoding for the header 
byte instead of an enumeration of values.  I like bit-flags when most of the 
combinations are valid, but of the three flags identified, comment and meta 
would not combine well with each other or with compressed.  As an alternative, 
I propose we use the following explicit enumeration for the header byte:

  0x00: uncompressed data
  0x01: snappy compressed data
  0x73: stream header 

If the code is 0x73 (ASCII 's') then the frame header block must be exactly 
"snappy\0".  All other codes are reserved.  This large reserved space allows 
for easy extension of the file format in the future.

I also suggest we require the stream header at the beginning of the file 
instead of making it optional.

Original comment by [email protected] on 5 Oct 2011 at 6:21

GoogleCodeExporter commented on July 19, 2024
Regarding the checksum, I thought Steinar had a very good point about the 
checksum protecting against bad encoders/decoders.  Thus, the checksum should 
always be of the original data, providing end-to-end protection for the user's 
data.

Original comment by electrum on 5 Oct 2011 at 6:51

GoogleCodeExporter commented on July 19, 2024
I'm happy to go with 32768 instead of 65535, given that kBlockSize == 32768 in 
C++.

Reserving all codes other than 0x00, 0x01 and 0x73 could work, but 
extension/metadata frames can have bodies, and bodies can also be compressed or 
uncompressed. I think it's just as easy to make meta-ness a bitfield bit, 
orthogonal to compression being a bitfield bit. An earlier (unpublished) design 
also had a meta-continue bit, in case the metadata's body was longer than 65535 
bytes, but I decided to leave that out until we actually have metadata to 
specify. Since I had three bits, then I figured that comment might as well be a 
bit too. Sure, not every bit combination is valid, but a lot of the bits are 
orthogonal.

Regarding the magic string, the weird capitalization of "sNaPpY\x00" is 
deliberate, to lessen the chance for a false positive. Also, I'm still leaning 
towards optional instead of mandatory, but I could be convinced otherwise.

Regarding the checksum, leveldb computes the checksum of the compressed bytes, 
not the uncompressed bytes. I don't know the reason for that, but I'm guessing 
that it was a deliberate decision. I'll ask.

Original comment by [email protected] on 6 Oct 2011 at 12:51

GoogleCodeExporter commented on July 19, 2024
Actually, it's pretty superficial, but what really bugs me is how the 0x73 
sticks out from everything else. What if we made the magic header "\x00sNaPpY", 
with the nul byte at the front? Thus:

0x00 stream header - the remaining six bytes must be "sNaPpY".
0x01 uncompressed body
0x02 compressed body

Anything else is reserved.

For any header not starting with 0x00, the remaining six bytes are a 
2-byte body length (up to 32768) and a 4-byte checksum (of the body's bytes on 
the wire, i.e. compressed).

Original comment by [email protected] on 6 Oct 2011 at 1:57

GoogleCodeExporter commented on July 19, 2024
I forgot to say that I chose to checksum the compressed bytes because the body 
of an extension frame may or may not be compressed, but it's unspecified how 
that is indicated by the opening header byte, so a version 1.0 decoder 
wouldn't know whether to decompress the body if it had to checksum the 
uncompressed bytes.

Original comment by [email protected] on 6 Oct 2011 at 3:04

GoogleCodeExporter commented on July 19, 2024
What do you mean by version 1.0 decoder? Is there already an established format 
that we need to be backwards compatible with?

The two primary reasons I can see for making the checksum on the uncompressed 
data are:

 1. Wider protection; you'll guard against not only implementation errors, but also bit-flip errors during compression.
 2. You can run it in parallel with the compression if you need to.

I'm not sure if a maximum _compressed_ size of 32768 bytes makes sense; that 
essentially _forces_ the compressor to add logic to copy the uncompressed data 
if it didn't compress, which is an extra copy you don't want. If you really 
can't live with 65535 as the limit, I'd suggest MaxCompressedLength(32768) = 
38261.
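The 38261 figure comes from snappy's worst-case expansion bound; assuming the formula used by the C++ library (32 + n + n/6), it can be reproduced directly:

```python
def max_compressed_length(source_len: int) -> int:
    # Worst-case output size for snappy-compressing `source_len` bytes,
    # mirroring the bound in the C++ library's MaxCompressedLength.
    return 32 + source_len + source_len // 6
```

max_compressed_length(32768) gives 38261, matching the number quoted above.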

Original comment by [email protected] on 6 Oct 2011 at 9:34

GoogleCodeExporter commented on July 19, 2024
There is no established format that we need to be backwards compatible with. By 
"version 1.0 decoder", I mean whatever we decide to do here.

What we have been discussing in the last dozen or so comments on this bug is a 
format that allows for extensions, but does not define any. If we decide to add 
an extension in the future (e.g. speeding up random seeks into a .snappy file), 
I'd simply like to ensure that any such extension won't break the decoding 
algorithm we decide upon here.


As for 'forcing' the compressor to add copy logic, I don't think it's 
problematic. The compression code can't compress directly to a sink because it 
needs to precede the data with a header saying how many data bytes to expect, 
and you don't know the length until after you've done the compression. Thus, it 
needs to compress to a buffer in memory.

The source (uncompressed) bytes are also already buffered in memory. 
Compression does not modify the source bytes. Thus, being able to write an 
uncompressed frame simply requires being able to choose which buffer to copy to 
the sink. I don't think that there's any unnecessary copying.

Original comment by [email protected] on 6 Oct 2011 at 12:04

GoogleCodeExporter commented on July 19, 2024
> You can run it in parallel with the compression if you need to.

Symmetrically, if the checksum is over the _compressed_ bytes, you can run it 
in parallel with the decompression.

But my not-based-on-any-experiments expectation is that I/O bandwidth would be 
the bottleneck in practice, and the time spent on checksumming would be 
relatively insignificant.

Original comment by [email protected] on 6 Oct 2011 at 12:11

GoogleCodeExporter commented on July 19, 2024
Here's another thinking-out-loud comment.

If you don't see the need to send uncompressed frames, and you limit the 
compressed body to 40960 bytes (0xA000), you can shorten the header to six 
bytes. The first two bytes form a little-endian uint16.

If that uint16 is 65535, the remaining four bytes must be "sNpY", and the body 
has zero length. The stream must start with at least one of these frames.

If that uint16 is in [0, 40960], that uint16 is the length of the compressed 
body, and the remaining four bytes is the little-endian checksum.

If that uint16 is in [40961, 65534], then this is an extension. The remaining 
four bytes (as a little-endian uint32) is the body length: the number of bytes 
to skip if the extension is unrecognized. If an extension uses a checksum, that 
checksum is given in the frame body instead of the frame header.

Original comment by [email protected] on 7 Oct 2011 at 1:23

GoogleCodeExporter commented on July 19, 2024
> limit the compressed body to 40960 bytes

I forgot to also say that the uncompressed body is at most 32768 bytes.

Original comment by [email protected] on 7 Oct 2011 at 1:34

GoogleCodeExporter commented on July 19, 2024
FYI.

> I forgot to also say that the uncompressed body is at most 32768 bytes.

I tested the decompression speeds for various uncompressed body sizes (such as 
8k, 16k, 32k, 64k, ...) using snzip.
The best size was 64k on my Linux box. The test data was an uncompressed Linux 
kernel tarball.

The best size will depend on the hardware; I guess it depends on the
ratio of I/O bandwidth to CPU speed.

Original comment by [email protected] on 7 Oct 2011 at 3:24

GoogleCodeExporter commented on July 19, 2024
proposal based on yours:
 * frame header is six bytes
 * uint16 fh0 is first two bytes (little endian)
 * uint32 fh1 is next four bytes (little endian)

if PREDICT_FALSE(fh0 & 0xc000) { /* special frame */
  if PREDICT_FALSE(fh0 == 65535) {
    /* stream header; fh1 must contain "sNpY" */
  }
  else {
    /* fh1 is length of frame body [0..2^32)
       fh0 == 65534: frame body is uncompressed data
       fh0 == 65533: frame body offset 0 .. (fh1 - 4) is uncompressed data.
                     frame body offset (fh1 - 4) .. fh1 is crc32c of the data.
       40960 <= fh0 <= 65532: extension */
  }
}
else { /* not a special frame; is a compressed data frame */
  /* compressed_length = fh0
     fh1 contains crc32c of uncompressed version of the data */
}

Original comment by [email protected] on 8 Oct 2011 at 12:33

GoogleCodeExporter commented on July 19, 2024
correction:
if PREDICT_FALSE(fh0 >= 0xc000) { /* special frame */

Original comment by [email protected] on 8 Oct 2011 at 12:46

GoogleCodeExporter commented on July 19, 2024
arggh, second correction; just can't get that line right:
if PREDICT_FALSE(fh0 >= 0xa000) { /* special frame */

Original comment by [email protected] on 8 Oct 2011 at 12:56

GoogleCodeExporter commented on July 19, 2024
I have released snzip.
https://github.com/kubo/snzip

This is basically the same as the snzip posted in comment 8.
The difference is that this one is written in C, not C++.

I'm not wedded to the current snzip format;
I released it to test various formats.

Original comment by [email protected] on 8 Oct 2011 at 8:24

GoogleCodeExporter commented on July 19, 2024
snzip was updated to support the file formats of snappy-java and Dain's snappy 
in Java.
https://github.com/kubo/snzip

I may add the stream format proposed at comment 32 after a week.

Original comment by [email protected] on 9 Oct 2011 at 9:09

GoogleCodeExporter commented on July 19, 2024
I still prefer the proposal in comment #24 over the ones in #29 or #32. I still 
don't see the problem in allowing uncompressed frame bodies, and in #32, I 
don't like how the checksum for uncompressed frames is in different places (if 
present at all) compared to the checksum for compressed frames.

For me, #24 is still the most regular: 1 byte flags, 2 byte length, 4 byte 
checksum.

Original comment by [email protected] on 10 Oct 2011 at 6:39

GoogleCodeExporter commented on July 19, 2024
I have given up on adding the format proposed in comment #32 to snzip.
Instead, I'm making another proposal. It is simple and extensible.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

= Overview

A stream is a sequence of frames. A frame consists of frame type (1
byte), data length (2 bytes, little endian) and data. Frame types are
defined as follows:

frame type:

  0x00         stream header. The data MUST be "snappy".
  0x01         compressed data frame
  0x02         uncompressed data frame
  0x03         end of stream
  0x04 - 0x3F  reserved
  0x40 - 0x7F  implementation-specific types

  0x80         implementation name, such as "snzip 0.0.3."
  0x81         comment
  0x82         checksum (CRC-32)
  0x83         checksum (CRC-32C)
  0x84 - 0xBF  reserved
  0xC0 - 0xFF  implementation-specific types

An implementation MUST support 0x00 - 0x03, SHOULD support 0x82 and MAY
support other types. If it finds an unsupported type, it SHOULD stop
reading the stream when "(type & 0x80) == 0" and SHOULD ignore it when
"(type & 0x80) != 0".

The data length before compression MUST be less than or equal to 32k.

= Simple case

A stream MUST start with a stream header frame "\x00\x06\x00snappy" and
end with an end-of-stream frame "\x03\x00\x00". A simple stream without
a checksum is:

Example 1: simple stream without checksum

  Frame 1:  "\x00\x06\x00snappy" (stream header)
  Frame 2:  compressed or uncompressed data frame
          ....
  Frame 100:  compressed or uncompressed data frame
  Frame 101:  "\x03\x00\x00" (end-of-stream)
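Example 1 can be generated with a few lines. This is a sketch against the proposal above, not a reference implementation; the function names are my own:

```python
import struct

def frame(ftype: int, data: bytes = b"") -> bytes:
    # 1-byte frame type + 2-byte little-endian data length + data.
    return struct.pack("<BH", ftype, len(data)) + data

def simple_stream(blocks) -> bytes:
    # Example 1: stream header, data frames, end-of-stream; no checksum.
    # `blocks` is a sequence of (compressed?, payload) pairs.
    out = frame(0x00, b"snappy")            # "\x00\x06\x00snappy"
    for compressed, payload in blocks:
        out += frame(0x01 if compressed else 0x02, payload)
    return out + frame(0x03)                # "\x03\x00\x00"
```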

= Checksum

I chose CRC-32 for the checksum because CRC-32 is widely used and Java
supports it as a standard class, java.util.zip.CRC32. You can use
CRC-32C as an option if you want to use the new CRC32 instruction in
SSE4.2 for speed. Note that the checksum data may be ignored if a
reader doesn't support it.

To use a checksum (CRC-32), a checksum start frame "\x82\x00\x00" SHOULD
appear just after the stream header frame, and a checksum data frame
"\x82\x04\x00" + (little-endian 4-byte data) SHOULD appear just before
the end-of-stream frame.

The checksum data MUST be calculated over all bytes in the frames
between the nearest preceding checksum start frame and the frame just
before the checksum data frame.

Example 2: A checksum for a stream

  Frame 1:  "\x00\x06\x00snappy" (stream header)
  Frame 2:  "\x82\x00\x00" (checksum start)
  Frame 3:  compressed or uncompressed data frame
          ....
  Frame 101:  compressed or uncompressed data frame
  Frame 102:  "\x82\x04\x00" + checksum of Frame 2 to Frame 101
  Frame 103:  "\x03\x00\x00" (end-of-stream)

If a reader implementation supports the checksum type, it MUST start
the checksum at the first checksum start frame, update the checksum
value for each frame, and compare it with the value in each checksum
data frame.

Any number of checksum data frames MAY be inserted after a checksum
start frame. Any number of checksum start frames MAY be inserted at
any place.

If a second or succeeding checksum start frame is found, the checksum
value MUST be reset.

Example 3: checksum for each frame

  Frame 1:  "\x00\x06\x00snappy" (stream header)
  Frame 2:  "\x82\x00\x00" (checksum start)
  Frame 3:  compressed or uncompressed data frame
  Frame 4:  "\x82\x04\x00" + checksum of Frame 2 and Frame 3
  Frame 5:  compressed or uncompressed data frame
  Frame 6:  "\x82\x04\x00" + checksum of Frame 2 to Frame 5
          ....
  Frame 101:  compressed or uncompressed data frame
  Frame 102:  "\x82\x04\x00" + checksum of Frame 2 to Frame 101
  Frame 103:  "\x03\x00\x00" (end-of-stream)

Example 4: reset the checksum value after each checksum data frame

  Frame 1:  "\x00\x06\x00snappy" (stream header)
  Frame 2:  "\x82\x00\x00" (checksum start)
  Frame 3:  compressed or uncompressed data frame
  Frame 4:  "\x82\x04\x00" + checksum of Frame 2 and Frame 3
  Frame 5:  "\x82\x00\x00" (checksum start)
  Frame 6:  compressed or uncompressed data frame
  Frame 7:  "\x82\x04\x00" + checksum of Frame 5 and Frame 6
          ....
  Frame 100:  "\x82\x00\x00" (checksum start)
  Frame 101:  compressed or uncompressed data frame
  Frame 102:  "\x82\x04\x00" + checksum of Frame 100 and Frame 101
  Frame 103:  "\x03\x00\x00" (end-of-stream)

This checksum scheme is optimized for "Example 2."

= Implementation-specific frame types

0x40 - 0x7F and 0xC0 - 0xFF may be freely used by implementations.
If one of them is included in a stream, an implementation name frame
(0x80) SHOULD be in the stream.

The type number SHOULD be between 0x40 and 0x7F if the frame data is
necessary to decode the stream, such as an encryption key.
The type number SHOULD be between 0xC0 and 0xFF if the frame data is
not needed to decode the stream, such as a timestamp.

Original comment by [email protected] on 18 Oct 2011 at 1:06

GoogleCodeExporter commented on July 19, 2024
I'm not sure if I can agree this would be "simple". For instance, the support 
for two different checksums to please a given compressor implementation seems 
awfully complex to me.

I'm also not sure we need to standardize comments or creators or multi-block 
checksums separate from the blocks themselves; what's the real-world use case 
for this? The two useful real-world use cases I currently know of that need a 
framing format like this (outside of Google, where we already have other 
solutions in place) are “pipe through SSH” and Hadoop's usage. If we can 
make something simple that covers these reasonably efficiently, and keep some 
extensibility, that would probably be best.

I agree, however, that 0x00 for the stream header is the most elegant. So 
here's my proposal:

0x00 - header (as in your proposal; must be "\x00\x06\x00snappy")
0x01 - compressed block (max 32768 bytes uncompressed data, max 65531 bytes 
compressed data)
0x02 - uncompressed block (max 32768 bytes data)
0x03-0x7f - reserved, fatal errors for 1.0 decoders
0x80-0xff - reserved, skippable by 1.0 decoders

All blocks have a little-endian two-byte length. Compressed and uncompressed 
blocks both begin with the CRC32c of the uncompressed data (this is why the 
0x01 block is max 65531 and not 65535).

There is explicitly no EOF marker, to make concatenation simple.
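A sketch of writing one uncompressed (0x02) block under this proposal. This is illustrative only; zlib.crc32 (plain CRC-32) stands in for CRC32c, which the Python standard library doesn't provide:

```python
import struct
import zlib

def write_uncompressed_block(data: bytes) -> bytes:
    # An 0x02 (uncompressed) block: type byte, little-endian 2-byte body
    # length, then the 4-byte CRC of the uncompressed data followed by the
    # data itself. zlib.crc32 is a stand-in for CRC32c here.
    assert len(data) <= 32768
    body = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF) + data
    return struct.pack("<BH", 0x02, len(body)) + body
```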

I think this should cover all the use cases I've seen presented so far, with 
the minimal amount of complexity (and it should be very close to what Hadoop 
already has implemented, as far as I understand). If snzip wants a block for 
its own metadata use (comments, creator, etc.) I'd be happy to allocate 0x80 to 
them for further sub-specification, which they can use for whatever they want.

Original comment by [email protected] on 18 Oct 2011 at 1:26

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I still think sNaPpY is better because it better facilitates something like 
Boyer-Moore for efficiently locking onto those envelopes if we were to use this 
for high availability streaming projects.

Also, you can peek 2 bytes from a stream (via get, peek, unget) to get a 2-byte
magic number.  How distinguishing is \x00\x06 relative to other file formats?
What does file/libmagic say?

Original comment by [email protected] on 19 Oct 2011 at 3:31

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
The proposal in comment #39 sounds good to me. My one complaint is that I would 
change "snappy" to "sNaPpY".

As for a 9-byte magic header, I think it's just as good as PNG's 8-byte magic 
header.

Original comment by [email protected] on 19 Oct 2011 at 8:38

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I can change to sNaPpY if people want; I don't see the big win, but it's not a 
big loss either.

The classical magic number is four bytes long; two bytes will almost never be
unique, no matter what you do. Unfortunately 0x00 0x06 is reserved as “TTComp
archive data” in magic(5). How about taking 0xff instead of 0x00? That
doesn't seem to match anything, and fits in nicely with “everything 0x80-0xff
is skippable”. (0x80 is taken for “8086 relocatable (Microsoft)”.)

So:

0x00 - compressed block
0x01 - uncompressed block
0x02-0x7f - reserved, unskippable
0x80-0xfe - reserved, skippable
0xff - header
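Under this allocation, a decoder's dispatch is straightforward; a minimal sketch (this mirrors the proposal above, not the final spec, and the function name is illustrative):

```python
def classify(chunk_type: int) -> str:
    # Classify one chunk-type byte under the proposed allocation.
    if chunk_type == 0x00:
        return "compressed"
    if chunk_type == 0x01:
        return "uncompressed"
    if chunk_type == 0xFF:
        return "header"
    if 0x80 <= chunk_type <= 0xFE:
        return "skippable"    # reserved, but unknown types may be skipped
    return "unskippable"      # 0x02-0x7f: reserved, fatal to a decoder
```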

I can write up a semi-formal spec for this and stick it in the archive if 
people want.

Original comment by [email protected] on 19 Oct 2011 at 10:02

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
One suggestion to the proposal in comment #42.
We need an EOF marker block.
If a compressed file is accidentally truncated exactly at the end of a
block, we cannot detect the truncation without the EOF marker block.

Original comment by [email protected] on 19 Oct 2011 at 1:00

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Hi,

We've resolved the EOF issues in a separate mail thread. I've attached my
current draft of the tentative spec.

There may or may not be an official stream compressor in the future, but it 
will not be part of the first commit.

Original comment by [email protected] on 25 Oct 2011 at 10:51

Attachments:

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I did say I would agree with you if the format were designed as a network
protocol, but I don't agree for a file format.
In any case, I'll let the issue go; my requirements and yours are different.

I just want to make sure of one thing.
Does the spec use the CRC-32C checksum defined by RFC 3720 section B.4?
Or does it use masked values, as in "Snappy written in pure java"(*1)?
*1
https://github.com/dain/snappy/blob/master/src/main/java/org/iq80/snappy/Crc32C.java
I guess the former, because it just says CRC-32C.

Well, one more thing.
What is the standard file extension name?
 gzip -> .gz
 bzip2 -> .bz2
 snappy -> .snappy???

Original comment by [email protected] on 25 Oct 2011 at 12:21

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
We should find a standard reference for CRC-32C, yes. The iSCSI RFC you linked 
to might be the authoritative reference?

We should use masked values, as you say. I'll update.
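The masking referred to is the rotate-and-add transform used by the Java Crc32C class linked above; a minimal sketch (the function name is illustrative):

```python
def mask_checksum(crc: int) -> int:
    # Rotate the 32-bit CRC right by 15 bits, then add a constant,
    # modulo 2^32. Masking makes the stored value differ from the raw
    # CRC, which helps when checksumming data that itself contains CRCs.
    return (((crc >> 15) | (crc << 17)) + 0xA282EAD8) & 0xFFFFFFFF
```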

If people are happy with using a longer-than-three-character extension, .snappy 
would be fine by me.

Original comment by [email protected] on 25 Oct 2011 at 12:32

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Updated with CRC-32C reference and masking. (It is okay to use the same masking 
constants as others, right?)

Original comment by [email protected] on 25 Oct 2011 at 1:00

Attachments:

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Looks good to me.  

I also think an EOF frame would be useful for detecting truncated streams (a
problem we are having right now).  In the case of a concatenated file, the only
legal frame after an EOF frame would be the stream identifier frame, and vice
versa.  This would make the decoder a bit more stateful, but I think the
benefit of detecting truncated streams outweighs this annoyance.

One final thing, I think we should formally agree on the value of the http 
Accept-Encoding header.  I'd go with just "snappy" here, but don't have a 
strong preference.

Original comment by [email protected] on 26 Oct 2011 at 12:08

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I suggest using 0xfe for the EOF marker since it's next to 0xff.

Original comment by electrum on 26 Oct 2011 at 12:13

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Which http Accept-Encoding header? Are people really proposing to snappy-encode 
HTTP requests? (Why?)

Original comment by [email protected] on 26 Oct 2011 at 1:36

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I am considering it.  When I already have snappy compressed data in my server, 
I'd like to send it directly to clients who can handle the encoding.  

More generally, I think the same analysis that leads someone to choose snappy 
over gzip or uncompressed data could lead to a decision to use it over http.

Original comment by [email protected] on 26 Oct 2011 at 4:25

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
One suggestion: would it make sense to allow use of the stand-alone header
marker as EOF as well (possibly with a modification to make sure it can be
detected as EOF)?
While longer than a single byte, it would not require reserving more bytes, and 
handling is likely to be simple as in-stream markers need to be supported 
anyway.

I also agree with Dain in that one definitely would want to allow use of Snappy 
similar to gzip in all use cases, including compressing HTTP request payload 
(POSTs).

Original comment by [email protected] on 26 Oct 2011 at 5:30

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I'm unaware if there's an RFC procedure to follow for this, but OK, I agree it 
could be useful, although I do not believe it would be supported in any major 
browser. In any case, you'd probably want to use e.g. “snappy-framed” to 
clearly distinguish it from raw Snappy data. Maybe x-snappy-framed?

Original comment by [email protected] on 2 Nov 2011 at 11:41

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
bump; there has been a lot of good discussion about streaming formats. Where 
does this stand with getting a command line tool merged in? what's left to do?

I think decisions/discussions about the encoding headers to use for HTTP
requests with a snappy payload are out of scope for this request.

Original comment by jehiah on 7 Dec 2011 at 4:08

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
The streaming format itself is in internal review, and will hopefully enter the 
repository soon.

With regards to an actual command-line compressor, there are currently no plans 
to have one in the standard tree, but having a streaming format in place should 
make it easier to develop one out-of-tree for those who would wish so.

Original comment by [email protected] on 7 Dec 2011 at 9:16

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
thanks for the clarification/update. I understand the streaming format helps 
support having a command line utility, but it wasn't clear to me that the goal 
for this issue had changed from having that utility in this repo to only having 
the streaming support.


Original comment by jehiah on 7 Dec 2011 at 3:54

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Hi,

r54 contains the framing format spec. It's largely what I posted here earlier, 
but with some minor clarifications etc. that showed up in internal review.

Original comment by [email protected] on 4 Jan 2012 at 10:47

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
jehiah: I guess the goal for the issue remains the same for the reporter, so 
it's not going to be closed just because we have a streaming format. It would, 
however, probably be closed if an appropriate out-of-tree compressor appeared. 
(It could also be closed as WontFix if we permanently decide for some reason 
that we don't want to do this.)

Original comment by [email protected] on 4 Jan 2012 at 4:55

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Can the standard be updated to include an EOF chunk (type 0xfe), per comment 
#48?

Original comment by electrum on 9 Feb 2012 at 7:31

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
This has already been extensively discussed. The answer is that we've decided 
not to make an EOF chunk.

Original comment by [email protected] on 9 Feb 2012 at 7:37

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Implementation planned in C++ for the streaming format?

Original comment by [email protected] on 1 Mar 2012 at 6:36

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Currently none, sorry.

Original comment by [email protected] on 1 Mar 2012 at 6:43

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
The currently defined framing format has two major inefficiencies: the 4-byte
checksum is stored for each block, rather than for a larger stream as in other
compressed formats, and the checksum is stored before the data, requiring the
compressor to hold back data or rewind the output stream to store it.  To
correct these I propose the following new chunk types:

4.4 Compressed data without checksum (Chunk type 0x02)

Like 0x00 but without the checksum

4.5 Uncompressed data without checksum (Chunk type 0x03)

Like 0x01 but without the checksum

4.6 Checksum so far (Chunk type 0x80)

Masked CRC-32C of the decompressed data of all chunks (not including headers)
since, but not including, the last chunk to store a CRC-32C checksum
(currently types 0x00, 0x01 and 0x80), going back at most to the latest type
0xFF chunk (inclusive).  Thus in no case will an implementation need more than
one running CRC-32C state per stream.

4.7 Cryptographic hash begin (Chunk type 0x81)

This stores the DER encoded OID-based algorithm identifier of a cryptographic 
hash algorithm to be applied to the decompressed data of this and all 
subsequent chunks in addition to CRC-32C.  If present, this SHOULD be right 
after a type 0xFF or 0x82 chunk, but may not be if a hashed stream is 
concatenated to a non-hashed stream.

4.8 Digital signature so far (Chunk type 0x82)

A DER encoded detached PKCS#7 signature of the decompressed data of all chunks
(not including headers) since (but not including) the last chunk to store such
a signature (currently only type 0x82) or the last type 0x81 chunk (inclusive),
whichever is later.  Certificate trust requirements are up to the recipient.
The use of counter-signature "unauthenticated attributes" is allowed.  The data
hash signed by the signature must be the one specified in the most recent
preceding type 0x81 chunk.  Chunk type 0x82 MUST NOT occur without a preceding
chunk type 0x81.  These cryptographic concepts are all specified elsewhere.
This chunk SHOULD be placed after any type 0x80 chunk if both are present.

Example stream 1:

0xFF stream identifier    Magic string is fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
0x03 uncompressed chunk   Data is fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
0x80 CRC-32C chunk covering the decompressed data
                          CRC-32C is then reset

Example stream 2:
0xFF stream identifier    Magic string is fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
0x03 uncompressed chunk   Data is fed to CRC-32C
0x02 compressed chunk     Decompressed data fed to CRC-32C
(CRC-32C not used or checked)

Example stream 3:
0xFF stream identifier    Magic string is fed to CRC-32C
0x81 hash identifier      OID is fed to CRC-32C and hash
0x02 compressed chunk     Decompressed data fed to CRC-32C and hash
0x02 compressed chunk     Decompressed data fed to CRC-32C and hash
0x03 uncompressed chunk   Data is fed to CRC-32C and hash
0x02 compressed chunk     Decompressed data fed to CRC-32C and hash
0x80 CRC-32C chunk covering the decompressed data
                          CRC-32C is then reset
0x82 digital signature covering all but the stream identifier

This has the following properties:

1. It is backwards compatible with the old stream format
2. Old stream readers will see the unsupported 0x02 or 0x03 chunks and stop
3. Streams can be trivially concatenated regardless of version
4. Both CPU and size overhead are smaller, because checksum masking and
reinitialization are done only once for a typical stream
5. In a typical compressor, the input to CRC-32C will be the magic string
followed by the input data, allowing a completely parallel calculation
independent of the snappy blocking and framing.
6. In a typical decompressor, the CRC-32C can be run in parallel with
outputting the data on the fly, objecting after the fact.
7. Except for the 32K buffering for the old type 0x00 and 0x01 chunks, there is
no need to buffer data just for the benefit of checksumming.  And a compressor
only needs to do this if it can be configured to produce the old format.
8. A pure hardware CRC-32C implementation (such as undoubtedly exist as stock
IC design blocks) can easily be used if extreme hardware acceleration beyond
CRC-offload instructions is needed.

Original comment by [email protected] on 7 May 2012 at 11:50

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I'm not sure if I agree with four checksum bytes per 32 kB block being a 
“major inefficiency”; that's 0.01% overhead. If you care about that sort of 
thing, you probably should not use Snappy, or at least not a framing format.

As for your other additions, I don't think the ability to digitally sign a 
Snappy file is in-scope for this bug, and I'm highly reluctant to create yet 
another way of signing files in the world. If you really have a use case for 
this, please open a separate bug, but be aware that it's quite likely to be 
closed with “won't fix”, especially as it does not look like anyone will 
write a command-line tool at all in the short term, let alone one with 
cryptographic capabilities.

Original comment by [email protected] on 7 May 2012 at 11:58

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I've taken a shot at implementing the framing protocol for python-snappy.

My implementation is close to a drop-in replacement for python-zlib's 
compressobj/decompressobj interface.

Whether or not this commit gets merged into mainline python-snappy, perhaps 
someone may be interested in what I have here?
https://github.com/jtolds/python-snappy/commit/7f304a6fc96f6936fc0192932ea025aeb2b4b9c6

Original comment by jtolds on 8 Nov 2012 at 6:01

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Heh, rereading the comments here I realized we probably want 
https://github.com/jtolds/python-snappy/commit/5a8660198cffc5230b2ee99e1102e8128cc61f71 too

Original comment by jtolds on 8 Nov 2012 at 6:31

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Nice; do you know if there's a command-line client that uses 
compressobj/decompressobj? That would take this bug a long way towards 
completion.

Original comment by [email protected] on 9 Nov 2012 at 1:14

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
no but it would be super easy to whip up. i haven't heard from the 
python-snappy maintainer at all about getting my changes merged in though.

Original comment by jtolds on 10 Dec 2012 at 9:54

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
whipped up: 
https://github.com/jtolds/python-snappy/commit/66211460734475f2076efff45e79ab3ecdfadb84

Original comment by jtolds on 10 Dec 2012 at 11:12

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
also, i guess i need to star issues to get emailed followup comments. sorry for 
the turnaround time on that comment, but starred now

Original comment by jtolds on 11 Dec 2012 at 12:01

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
I haven't tested it, just a small comment; you probably want 32 kB block size, 
not 16 kB. Snappy works by default in 32 kB blocks.

Original comment by [email protected] on 23 Dec 2012 at 11:17

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
oh you're totally right, rookie mistake. i even briefly thought about it. 
"should it be 32kb? no, the length includes the checksum" which ends up 
actually kind of being a non-sequitur reason

https://github.com/jtolds/python-snappy/commit/f14a2187bf48dd34001ccc74588c8ec8116f548a

Original comment by jtolds on 23 Dec 2012 at 7:07

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Hi guys,

First the good news: Snappy now compresses about 3% denser! Then the bad news: 
That change necessitated a change to the framing format (3-byte offsets instead 
of 2-byte).

jtolds: I'm afraid you'll need to change your implementation :-) Note that the 
stream identifier has changed as a side effect, so you won't need to worry 
about old streams being confused with the new format.
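Under the revised framing, each chunk is a type byte followed by a 3-byte little-endian length and that many payload bytes; a minimal reader sketch (the function name is illustrative):

```python
import io

def read_chunk(stream):
    # One chunk: 1 type byte, 3-byte little-endian payload length, payload.
    header = stream.read(4)
    if not header:
        return None  # clean end of stream
    if len(header) < 4:
        raise ValueError("truncated chunk header")
    chunk_type = header[0]
    length = header[1] | (header[2] << 8) | (header[3] << 16)
    payload = stream.read(length)
    if len(payload) < length:
        raise ValueError("truncated chunk payload")
    return chunk_type, payload

# The revised stream identifier is a type-0xff chunk carrying "sNaPpY",
# i.e. the ten bytes b"\xff\x06\x00\x00sNaPpY".
stream = io.BytesIO(b"\xff\x06\x00\x00sNaPpY")
```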

Original comment by [email protected] on 18 Jan 2013 at 12:18

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
nice, will update shortly

Original comment by jtolds on 23 Jan 2013 at 11:07

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
https://github.com/jtolds/python-snappy/commit/50ea5ab816f3830a1194271bfec406d3518eefe9

Original comment by jtolds on 9 Feb 2013 at 12:37

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
python-snappy 0.5 now implements the latest framing format 
(http://code.google.com/p/snappy/source/browse/trunk/framing_format.txt?spec=svn68&r=71)

Original comment by jtolds on 20 Feb 2013 at 12:11

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
i'm not sure if anyone else has implemented the framing format, but it would be 
sweet if you could test out the python-snappy implementation to make sure it 
looks right. i'm a little concerned the only implementation i know of is just 
in the python library.

would it be worth trying to submit a c version to the mainline snappy library 
as well?

Original comment by jtolds on 20 Feb 2013 at 12:12

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Hi,

I tested “python snappy.py -c < README.rst” and eyeballed the output in a 
hex editor. While this obviously won't catch all off-by-ones or things like 
wrong checksum calculations, the output does look fine to me.

I don't think there's enough interest right now to warrant a C++ implementation 
from our side, and it should probably live in a spinoff repository anyway.

Original comment by [email protected] on 20 Feb 2013 at 3:23

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
OK, given that there are now three independent implementations of the framing 
format, and at least one of them can be invoked from the command line, I'd say 
this is fixed. I'm updating the front page to reflect that python-snappy can do 
this. Thanks to everybody for participating :-)

Original comment by [email protected] on 14 Jun 2013 at 11:35

  • Changed state: Fixed

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Issue 80 has been merged into this issue.

Original comment by [email protected] on 22 Nov 2013 at 6:15

from snappy.

GoogleCodeExporter avatar GoogleCodeExporter commented on July 19, 2024
Issue 80 has been merged into this issue.

Original comment by [email protected] on 22 Nov 2013 at 6:37

from snappy.
