Comments (34)
@sritchie how do you see this working? I can see defining a bunch of thrift structs that mirror the existing algebird case classes, and making sure all the logic for working with them is moved into the corresponding monoids... or I guess I could see using implicits to make Rich* wrapper classes and put behavior there... but is there some fancier way to use Scrooge that would let it instantiate objects that we could actually define methods on directly?
from algebird.
I don't think so, re: the fancy interactions with Scrooge. My idea here was to define the thrift mappings as you've described, then write Bijections between the case classes and thrift classes.
Using Scrooge and moving logic out of the case classes into the Monoids is a better idea :) That, or we forgo thrift and its versioning woes and go with the Bufferable solution.
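The bijection idea above can be sketched in a few lines. This is a minimal, self-contained sketch: the `Bijection` trait stands in for bijection's real trait, and `ThriftDecayedValue` is a hypothetical stand-in for a Scrooge-generated class, not the real generated code.

```scala
// Minimal Bijection trait, standing in for com.twitter.bijection.Bijection.
trait Bijection[A, B] {
  def apply(a: A): B
  def invert(b: B): A
}

// Algebird-side case class (simplified) and a stand-in for a
// thrift/Scrooge-generated class carrying the same fields.
case class DecayedValue(value: Double, scaledTime: Double)
class ThriftDecayedValue(val value: Double, val scaledTime: Double)

object DecayedValueBijection extends Bijection[DecayedValue, ThriftDecayedValue] {
  def apply(a: DecayedValue): ThriftDecayedValue =
    new ThriftDecayedValue(a.value, a.scaledTime)
  def invert(b: ThriftDecayedValue): DecayedValue =
    DecayedValue(b.value, b.scaledTime)
}

// Round trip: the bijection must be lossless in both directions.
val original = DecayedValue(3.5, 100.0)
val roundTripped = DecayedValueBijection.invert(DecayedValueBijection(original))
```

With a pair like this per structure, all monoid logic stays on the algebird side and the thrift side is purely a wire representation.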
Bufferable makes me uneasy, to be honest; feels like a step backwards. Having a good separate spec for the structures and their serialization - that could even be used from other languages, in theory - would be nice. Having looked at it a bit, though: with Thrift we're going to run into problems with the lack of recursive types... we'd do a little better with protobuf (or Avro, probably).
I'm going to wake this issue up - I think if we can get this right (and that probably does not involve using Thrift) we'll remove a good amount of pain from using these data structures in Summingbird.
Looks like more discussion on this happened here as well: #463
I'd just like to mention that we are probably better off using thrift / protobuf / avro / etc. than using our own serialization format. Not because we couldn't come up with a good format, but because ensuring that we never accidentally introduce an incompatible change to the format is hard, and these libraries do a pretty good job of giving you rules for what you can / can't do if you want to maintain long term compatibility.
That said, it's a burden on pretty much everyone, whichever one of the above we do pick. If your company is using avro and we pick protobuf, now you've got to use both avro and protobuf. I don't know if there's any alternatives that don't involve code generation libraries, but I think if we can use some sort of serialization library that thinks explicitly about compatibility over time that'd be helpful.
If not, then I think we'll need some very strict tests around compatibility, and we'll need to keep old serialized copies of these objects around for unit tests. I think it's difficult to get right, unfortunately.
the problem with picking something else is that then we have version hell, and not everyone uses that other thing.
If we just have a tested system with golden-data committed we can handle the conversion to-and-from bytes fairly easily.
The main thing is that we would then have tests to see if we changed anything.
If you want thrift, you all have it: just define an injection from algebird -> thrift and never change your thrift.
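The golden-data approach described here is simple to state in code: serialize a fixed value and compare it byte-for-byte against bytes committed to the repo. The encoding below is a made-up toy, not algebird's actual format; the point is the shape of the test.

```scala
import java.nio.ByteBuffer

// Toy serializer: a (Long, Long) pair as 16 big-endian bytes.
def toBytes(pair: (Long, Long)): Array[Byte] = {
  val bb = ByteBuffer.allocate(16)
  bb.putLong(pair._1).putLong(pair._2)
  bb.array()
}

// "Golden" bytes, committed to the repo once and never regenerated
// automatically. If toBytes ever changes behavior, this check fails loudly.
val golden: Array[Byte] =
  Array[Byte](0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2)

val current = toBytes((1L, 2L))
val compatible = java.util.Arrays.equals(current, golden)
```

The discipline is that the golden file may only change alongside a deliberate, versioned format change, never as a side effect of refactoring.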
It's not the correctness of the bijection, it's the correctness over time as the schema changes (if it changes). It's pretty easy to miss an unexpected corner case. And introducing even one incompatible change causes a lot of headache for users because they often don't find out until months later, when they realize a chunk of their old data isn't compatible anymore.
I think if we build our own serialization format, we'll need some very strict tests. We will also have to make sure that we check in new golden data every time we make a change to the relevant code. Or we can do what other libraries do and introduce versioned data formats, where we write the version something was serialized with into the output. There are definitely options, but none of them are super straightforward.
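The versioned-format option can be sketched as: prefix every payload with a version byte and dispatch on it when reading, so old data stays readable after the format evolves. The format and the "version 2" semantics below are purely illustrative.

```scala
import java.nio.ByteBuffer

// Write: a version byte followed by the payload.
def serialize(version: Byte, count: Long): Array[Byte] = {
  val bb = ByteBuffer.allocate(9)
  bb.put(version).putLong(count)
  bb.array()
}

// Read: dispatch on the version byte; every released version must stay
// readable forever.
def deserialize(bytes: Array[Byte]): Long = {
  val bb = ByteBuffer.wrap(bytes)
  bb.get() match {
    case 1 => bb.getLong       // version 1: count stored directly
    case 2 => bb.getLong / 2   // hypothetical version 2: stored doubled
    case v => sys.error(s"unknown version $v")
  }
}

val decoded = deserialize(serialize(1, 42L))
```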
Maybe something we could try is a shaded copy of protobuf / thrift / avro -- keep that detail entirely private but use their schema compatibility features.
yes, the shaded copy of protobuf is what I was going to suggest for one approach. I do agree with @johnynek though that it could be hellish.
I think we should very slowly opt data structures into this system. We can be fairly strict - if we ever change the representation, we have to make a new data structure.
The thing is, I don't think this solves the problem. You still need the same testing. You can just as easily break backwards compatibility by changing the shape of your protobuf.
Yeah, but protobuf has itself been tested. You only need to test that you follow the right rules for changing the protobuf schema.
yeah, but unless we trust humans to do that, we still need to build the golden data sets.
Libraries like Lucene use versioned serialization, and support reading / writing old versions in all newer versions. That's a little bit of a separate problem, though.
actually, I don't think serialization itself is that hard (at all). The challenge here is testing and not breaking.
People will make mistakes, so if we do this project, I don't see a ton of value unless we have strong tests on it.
@johnynek you actually only need a test that the protobuf schema hasn't changed incorrectly. So instead of a golden data set, you just need a copy of the old schema.
if you build a schema migration checker.
yes exactly
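A minimal sketch of such a schema migration checker, under the assumption that a schema can be reduced to a map of field tag to type name. The protobuf-style rule of thumb: every old tag must still exist with the same type; new tags may be added freely. This is an illustration, not a complete compatibility rule set.

```scala
// A schema reduced to its compatibility-relevant core: tag -> type name.
type Schema = Map[Int, String]

// The new schema is compatible if every old (tag, type) pair survives.
def compatible(oldSchema: Schema, newSchema: Schema): Boolean =
  oldSchema.forall { case (tag, tpe) => newSchema.get(tag).contains(tpe) }

val v1: Schema = Map(1 -> "int64", 2 -> "string")
val added: Schema = Map(1 -> "int64", 2 -> "string", 3 -> "double") // ok: new field
val retyped: Schema = Map(1 -> "string", 2 -> "string")             // bad: type changed

val okResult = compatible(v1, added)
val badResult = compatible(v1, retyped)
```

In CI, the old schema would be the checked-in copy and the new schema the one generated from the current source.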
Whatever we decide, I should state quickly my reason for waking this up!
I felt a few years ago, and I still feel now, that serialization is the primary blocker for folks leaning in to using these more advanced data structures. Adding Kryo was a HUGE help for Cascalog/Scalding... unfortunately, folks have to come back down to earth after their initial exploration and figure out how to store data The Right Way.
We should be providing really tight serializations for all of the data structures that we're all using in production.
The reason I don't really like using the existing frameworks is that it's hard to publish/share the schemas. Better to just rely on algebird to serialize/deserialize and maintain compatibility here. If you really want to store it your way, great! Write your own serializer. Totally fine. But we want all of the powerful data structures we've built to work out-of-the-box with excellent defaults when used with Summingbird / Scalding.
I'm concerned the translation to an intermediate data structure will carry a pretty high performance cost, given that the real issue is data stability (which I think requires golden data to guard against accidental changes to the underlying schemas/library even in the other case).
I would like to wake this issue again, as this is the biggest impediment to introducing algebird to a larger audience at my current $work. Data scientists (read: python programmers / matlab ninjas) hesitate to reach for some of the data structures, as they inevitably run into serialization/persistence issues. One of the reasons HLL is so widely used is that it comes with toByte/fromByte methods. Can we at least provide examples of how to serialize CMS, QTree, etc. in any framework (thrift, avro, etc.) so that people can translate to their in-house serialization framework? Right now they don't even know where to start.
@MansurAshraf I think everyone likes the idea. I think we are all just trying to get the other one to do it. :)
@johnynek algebird-internal already has all the good stuff, we just need to convince @isnotinvain to open source it.
yeah, or maybe @benpence @piyushnarang @sriramkrishnan etc...
Thanks @isnotinvain, any idea on timelines?
I'm asking CDL team what they think (I switched teams), but I think we should be able to do this.
There's also good discussion in this thread from before. We never really picked how we wanted to handle this. We can just make our thrift stuff available, but it's going to add a thrift dependency to algebird, isn't going to help folks that wanted to use protobuf or avro, and w/o a schema compatibility checker, it'd be good to have some more tests too. There was also talk of shading in this thread as well. Any thoughts on those issues?
my 2 cents are that you guys should provide the existing thrift serializers as reference implementations, so that you are not responsible for schema evolution etc. However, this gives other people a way to port it to their own binary format such as avro, proto, etc. In my case we don't use thrift, so I will have to implement the same serializers in Avro & Proto for different teams, but if I have a battle-tested thrift implementation to base my work on, I would be much more confident that I implemented the serializers correctly.
Let me know what you think.
Yeah, that seems fine. Ideally I guess we'd have all three (thrift, avro, proto) in algebird sub-modules if / when they get contributed. (don't know if your $work will let you do that).
Hello all,
I'm working on a new project that uses CMS, and I chose to seize this opportunity to introduce algebird in the project. I'm very impressed by this project and how powerful it is once the initial abstraction layer is integrated.
I reckon I still have to level up on the bijection stuff and how it relates to serialization, though. Posting here after following up on #463 and #172, as I need to serialize CMSInstance to protobuf (I could do with thrift too) so as to read it back from another language.
I'm willing to contribute any development upstream, but for now I'm missing the right entry point on how to leverage bijections to do that (instead of a hand-written protobuf/thrift/... serializer and deserializer).
Can you provide any example, guidance, or preliminary design so that my contribution is aligned with what you have in mind?
So this is the Algebird CMSInstance class:

case class CMSInstance[K](
    countsTable: CMSInstance.CountsTable[K],
    override val totalCount: Long,
    override val params: CMSParams[K]) extends CMS[K](params)
1 - create an avro/thrift/proto schema that closely matches this case class
2 - use your favorite scala library to generate a case class from your schema
3 - map CMSInstance to your generated case class
4 - serialize your case class using avro/proto/thrift etc. I think Bijection has serializers for thrift & avro. Not sure about protobuf.
Things that will trip you up:
As you can see, CMSInstance[K] has a type parameter K. This is used to implicitly inject a Hasher instance, which makes serialization tricky. One way you can solve that is by fixing K to Array[Byte]. There is already a Hasher instance for byte arrays, and it has a contramap method that you can use to map from your type to a byte array.
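Following those steps, the mapping in step 3 might look like this sketch. `CmsProto` is a hypothetical stand-in for a schema-generated class; the real `CMSParams` carries more than depth/width, so treat this as the shape of the mapping, not a complete one.

```scala
// Hypothetical generated-class stand-in mirroring CMSInstance with K fixed
// to Array[Byte]: the counts table plus enough parameters to rebuild it.
case class CmsProto(
    depth: Int,
    width: Int,
    counts: Vector[Long], // depth * width table, flattened row-major
    totalCount: Long)

// Map a 2-D counts table (the shape CMSInstance.CountsTable holds) into the
// flat mirror, and back.
def toProto(table: Vector[Vector[Long]], totalCount: Long): CmsProto =
  CmsProto(table.length, table.head.length, table.flatten, totalCount)

def fromProto(p: CmsProto): (Vector[Vector[Long]], Long) =
  (p.counts.grouped(p.width).toVector, p.totalCount)

val table = Vector(Vector(1L, 2L, 3L), Vector(4L, 5L, 6L))
val roundTripped = fromProto(toProto(table, 21L))
```

The `CmsProto` value is then what you hand to your avro/proto/thrift serializer in step 4.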