First of all, hello Hacker News :)
Many of the comments seem to address the design of key hashing. The reason for using hashed keys inside B-tree nodes instead of the string keys directly is threefold:
1) The implementation is simplified.
2) When performing a lookup, comparing fixed-size elements is faster than variable-length string comparison.
3) Key length is unlimited, since only the fixed-size digest is stored in the node.
I should say the documentation page is out of date regarding hash collisions. The format now supports probing thanks to a PR merged yesterday. So inserting colliding keys will actually work.
It is true that databases and other formats do store string keys directly in their nodes. However, as Lite³ is a memory format, runtime performance is paramount: there is no disk or IO latency to 'hide behind'.
Right now the hash function used is DJB2. It has the interesting property of somewhat preserving the lexicographical ordering of the key names, so hashes for keys like "item_0001", "item_0002" and "item_0003" are likely to also be placed sequentially inside the B-tree nodes. This can be useful when doing a sequential scan over the semantic key names; otherwise you are doing a lot more random access.
Also, DJB2 is so simple that for literal keys it can be expanded into a constant expression and folded at compile time, so you are not actually paying the runtime cost of hashing.
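To make both points concrete, here is a small self-contained sketch (mine, not the library's actual macros) showing the DJB2 loop, the near-sequential digests you get for sequential key names, and a macro form that an optimizing compiler folds to a constant for short literal keys:

#include <stdint.h>
#include <stdio.h>

/* Classic DJB2: h = h*33 + c, starting from 5381. */
static uint32_t djb2(const char *s)
{
    uint32_t h = 5381u;
    for (; *s; s++)
        h = h * 33u + (uint8_t)*s;
    return h;
}

/* Macro form for a 4-character literal key: expands to a constant
 * expression, so the hash costs nothing at runtime. (Illustrative; the
 * real Lite³ macros may differ.) */
#define DJB2_STEP(h, c) ((uint32_t)(h) * 33u + (uint8_t)(c))
#define DJB2_4(s) \
    DJB2_STEP(DJB2_STEP(DJB2_STEP(DJB2_STEP(5381u, (s)[0]), (s)[1]), (s)[2]), (s)[3])

int main(void)
{
    /* Keys that differ only in their last character hash to adjacent
       values, which is why "item_0001".."item_0003" tend to land next to
       each other inside a B-tree node. */
    printf("%u\n", djb2("item_0001"));
    printf("%u\n", djb2("item_0002"));  /* previous value + 1 */
    printf("%u\n", djb2("item_0003"));  /* previous value + 2 */
    printf("%u\n", DJB2_4("item"));     /* folded to an immediate */
    return 0;
}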
We will be doing a lot more testing before DJB2 is finalized in the spec, but might later end up with a 'better' hash function such as XXH32.
Finally, TRON/Lite³ compared to other binary JSON formats (BSON, MsgPack, CBOR, Amazon Ion) is different in that:
1) none of the formats mentioned provide direct zero-copy indexed access to the data
2) none of the formats mentioned allow for partial mutation of the data without rewriting most of the document
This last point 2) is especially significant. For example, JSONB in Postgres is immutable: when replacing or inserting a single value inside an object or array, JSONB rewrites the entire document, even if it is several megabytes large. If you perform frequent updates inside JSONB documents, this causes severe write amplification. This is the case for all current Postgres versions.
TRON/Lite³ is designed to blur the line between memory and serialization format.
Perhaps I should have posted this URI instead: https://lite3.io/design_and_limitations.html
Lite³ deserves to be noticed by HN. u/eliasdejong (the author) posted it 23 days ago but it didn't get very far. I'm hoping this time it gets noticed.
FTA#1: “Hashmaps do not (efficiently) support range queries. Since the keys are stored in pseudorandom order”
FTA#2: “Object keys (think JSON) are hashed to a 4-byte digest and stored inside B-tree nodes”
It still will likely be faster because of better cache locality, but doesn’t that mean this also does not (efficiently) support range queries?
That page also says
“tree traversal inside the critical path can be satisfied entirely using fixed 4-byte word comparisons, never actually requiring string comparisons except for detection of hash collisions. This design choice alone contributes to much of the runtime performance of Lite³.”
How can that be true, given that this beats libraries that use hash maps, that also rarely require string comparisons, by a large margin?
Finally, https://lite3.io/design_and_limitations.html#autotoc_md37 says:
“Inserting a colliding key will not corrupt your data or have side effects. It will simply fail to insert.”
I also notice this uses the DJB2 hash function, which has hash collisions between short strings (http://dmytry.blogspot.com/2009/11/horrible-hashes.html), and those are more likely to be present in json documents. You get about 8 + 3 × 5 = 23 bits of hash for four-character strings, for example, increasing the risk of collisions to, ballpark, about one in three thousand.
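For a concrete instance of such a short-string collision (my example, not from the linked post): raising one character by 1 while lowering the next by 33 leaves h*33 + c unchanged, so even two-character keys can collide.

#include <stdint.h>
#include <stdio.h>

static uint32_t djb2(const char *s)
{
    uint32_t h = 5381u;
    for (; *s; s++)
        h = h * 33u + (uint8_t)*s;
    return h;
}

int main(void)
{
    printf("djb2(\"Aa\") = %u\n", djb2("Aa"));  /* 5862151 */
    printf("djb2(\"B@\") = %u\n", djb2("B@"));  /* 5862151: 'B' = 'A'+1, '@' = 'a'-33 */
    return 0;
}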
=> I think that needs fixing before this can be widely used.
Looking at the actual code (https://github.com/fastserial/lite3/blob/main/src/lite3.c#L2...), it seems like it performs up to 128 probes to find a target before failing, rather than bailing immediately if a collision is detected. It seems like maybe the documentation needs to be updated?
It's a bit unfortunate that the wire format is tied to a specific hash function. It also means that the spec will ossify around a specific hash function, which may not end up being the optimal choice. Neither JSON nor Protobuf have this limitation. One way around this would be to ditch the hashing and use the keys for the b-tree directly. It might be worth benchmarking - I don't think it's necessarily any slower, and an inline cache of key prefixes (basically a cheapo hash using the first N chars) should help preserve performance for common cases.
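A minimal sketch of that prefix-cache idea (names and layout are hypothetical, not the actual Lite³ node format): each entry stores the first four key bytes inline, so most comparisons stay fixed-size and only prefix ties pay for a string comparison.

#include <stdint.h>
#include <string.h>

typedef struct {
    char     prefix[4];   /* first 4 key bytes, zero-padded */
    uint32_t key_offset;  /* offset of the full NUL-terminated key */
} entry_t;

static int entry_cmp(const entry_t *e, const char *key, const uint8_t *buf)
{
    char p[4] = {0};
    size_t len = strlen(key);
    memcpy(p, key, len < 4 ? len : 4);
    int c = memcmp(e->prefix, p, 4);   /* fast path: fixed-size compare */
    if (c != 0)
        return c;                      /* settled without touching strings */
    return strcmp((const char *)(buf + e->key_offset), key);  /* rare tie */
}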
> It seems like maybe the documentation needs to be updated
Looks like it, yes:
/**
Enable hash probing to tolerate 32-bit hash collisions.
Hash probing configuration (quadratic open addressing for 32-bit hashes:
h_i = h_0 + i^2)
Limit attempts with `LITE3_HASH_PROBE_MAX` (defaults to 128).
Probing cannot be disabled.
*/
#ifndef LITE3_HASH_PROBE_MAX
#define LITE3_HASH_PROBE_MAX 128U
#endif
#if LITE3_HASH_PROBE_MAX < 2
#error "LITE3_HASH_PROBE_MAX must be >= 2"
#endif
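So on a collision the i-th attempt rehashes to h_0 + i^2 rather than failing outright. A simplified illustration of that probe loop (not the library's actual insert path):

#include <stdint.h>
#include <stdbool.h>

#ifndef LITE3_HASH_PROBE_MAX
#define LITE3_HASH_PROBE_MAX 128U
#endif

/* 'occupied' stands in for whatever tree lookup decides a digest is taken. */
static bool find_free_digest(uint32_t h0, bool (*occupied)(uint32_t),
                             uint32_t *out)
{
    for (uint32_t i = 0; i < LITE3_HASH_PROBE_MAX; i++) {
        uint32_t h = h0 + i * i;   /* h_i = h_0 + i^2, wrapping mod 2^32 */
        if (!occupied(h)) {
            *out = h;
            return true;
        }
    }
    return false;   /* probe budget exhausted: the insert fails cleanly */
}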
> It also means that the spec will ossify around a specific hash function
It is a bit ugly, and will break backwards compatibility, but supporting a second hash function isn’t too hard.
You can, on load, hash a few keys and compare them to the stored hashes; if the input has many keys, that lets you infer with high probability which hash function was used.
There might also be a spare bit somewhere to indicate ‘use the alternative hash function’.
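A sketch of that load-time detection (hypothetical; FNV-1a stands in here for whatever second function, e.g. XXH32, the spec might adopt):

#include <stdint.h>
#include <stddef.h>

typedef uint32_t (*hash_fn)(const char *);

static uint32_t djb2(const char *s)
{
    uint32_t h = 5381u;
    for (; *s; s++) h = h * 33u + (uint8_t)*s;
    return h;
}

static uint32_t fnv1a(const char *s)   /* stand-in second candidate */
{
    uint32_t h = 2166136261u;
    for (; *s; s++) h = (h ^ (uint8_t)*s) * 16777619u;
    return h;
}

/* Sample some (key, stored digest) pairs from the loaded document; the
 * candidate that reproduces every digest is, given many keys, almost
 * certainly the function that wrote the file. */
static hash_fn detect_hash(const char *const *keys, const uint32_t *digests,
                           size_t n)
{
    hash_fn candidates[2] = { djb2, fnv1a };
    for (size_t c = 0; c < 2; c++) {
        size_t matches = 0;
        for (size_t i = 0; i < n; i++)
            if (candidates[c](keys[i]) == digests[i])
                matches++;
        if (matches == n)
            return candidates[c];
    }
    return NULL;   /* neither matched: unknown or corrupted input */
}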
Reading the code (nice-looking, BTW, for C code, but since it is C code, also full of warnings that other languages can protect you from) I spotted this (https://github.com/fastserial/lite3/blob/acbb97984eca1183ddc...):
> The JSON standard requires that the root-level type always be an ‘object'
> or 'array'. This also applies to Lite³.
I don’t think that is true, and https://www.json.org/json-en.html agrees: single values (numbers, strings, booleans, null) are also valid JSON.
This needs more attention than it's getting. Perhaps some changes to the landing page could help?
"outperforms the fastest JSON libraries (that make use of SIMD) by up to 120x depending on the benchmark. It also outperforms schema-only formats, such as Google Flatbuffers (242x). Lite³ is possibly the fastest schemaless data format in the world."
^ This should be a bar graph at the top of the page, showing both serialized sizes and speeds.
It would also be nice to see a JSON representation on the left and a color-coded string of bytes on the right showing how the data is packed, with the explanation following below.
As already mentioned in other comments, it doesn't really make sense to compare against JSON parsers, since Lite³ parses, well, Lite³ and not JSON. It serves a different use case, and I think focusing on performance versus JSON (especially JSON parsers) is not the strongest part of this project.
This is cool, but the headline makes it sound like the wire format is JSON-compatible, which is not the case. I'm not really sure why there is a focus on JSON here at all: it's the least interesting part of this, and the same could be said for almost every serialization format.
“By default, deleted values are overwritten with NULL bytes (0x00). This is a safety feature since not doing so would leave 'deleted' entries intact inside the datastructure until they are overwritten by other values. If the user wishes to maximize performance at the cost of leaking deleted data, LITE3_ZERO_MEM_DELETED should be disabled.”
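In other words (an illustrative sketch, not the actual Lite³ delete path), the delete scrubs the value's bytes instead of merely unlinking it:

#include <stdint.h>
#include <string.h>

static void delete_value(uint8_t *buf, size_t value_offset, size_t value_len)
{
#ifdef LITE3_ZERO_MEM_DELETED   /* quoted flag; exact mechanism per the docs */
    memset(buf + value_offset, 0x00, value_len);  /* no stale data remains */
#endif
    /* ...then unlink the entry from its B-tree node... */
}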
The hash collision limitation for keys is the most questionable part of the design. Usually that's handled by forcing the key lookup to verify that what you looked up matches what you tried to look up.
Resolving the resulting perf hit is probably doable by keeping an extra table of conflicting hashes, as sketched below.
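A rough sketch of that side-table approach (hypothetical layout): the hot path stays pure digest comparison, and only digests known to collide pay for full key verification.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    uint32_t     digest;   /* a digest shared by more than one key */
    const char **keys;     /* the colliding full keys */
    size_t       nkeys;
} collision_entry;

/* Returns nonzero if 'wanted' is genuinely the key behind 'digest'. */
static int verify_key(uint32_t digest, const char *wanted,
                      const collision_entry *table, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].digest != digest)
            continue;
        for (size_t k = 0; k < table[i].nkeys; k++)   /* rare slow path */
            if (strcmp(table[i].keys[k], wanted) == 0)
                return 1;
        return 0;   /* digest collides, and 'wanted' isn't among its keys */
    }
    return 1;   /* digest not in the table: no known collision, so the
                   4-byte comparison alone was conclusive */
}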
Apache Arrow is trying to do something similar, using FlatBuffers to serialize with zero-copy and zero-parse semantics, and an index structure built on top of that.
Would love to see comparisons with Arrow.
So it's not really a serialization format; it's a compact, modifiable, untyped tree that one can therefore send to another machine with the same architecture, or deserialise into native language-specific data structures.
Don't get me wrong, I find this type of data structure interesting and useful, but it's misleading to call it "serialization", unless my understanding is wrong.
I'm not sure what distinction you're trying to make here.
How does machine architecture play into it? It sounds like int sizes are the same regardless of the machine's word size; the choices made just happen to perform well on common architectures. Or is it about endianness? Do big endian machines even exist anymore?
GLTF is like this too (or PLY)? The main difference is the format of their headers? Just by reading the header you can parse the binary data. I'm surprised BSON and the other binary JSON formats they list don't support reading the memory layout from a header.