
cascading-cassandra's People

Contributors

bitdeli-chef, ifesdjeen, jimternet, mccraigmccraig, mknoszlig


cascading-cassandra's Issues

Final sink step will always be skipped

Hello,

I believe I have found a bug that will always prevent sinking to Cassandra, i.e. Cascading will always skip the step.

getModifiedTime() in CassandraTap.java always returns the current time. For sourcing from tables this is 'correct', but for sinking it causes Cascading's

isSkipFlowStep()

check (https://github.com/Cascading/cascading/blob/2.1/cascading-core/src/main/java/cascading/flow/planner/FlowStepJob.java) to always return true, because of the

return flowStep.allSourcesExist() && !flowStep.areSourcesNewer( flowStep.getSinkModified() );

test there, combined with the getModifiedTime() implementation in https://github.com/ifesdjeen/cascading-cassandra/blob/master/src/main/java/com/ifesdjeen/cascading/cassandra/CassandraTap.java

I believe modifying CassandraTap to return, say, a modification time of 0 when in sink mode would force areSourcesNewer() to return true, so the step would no longer be skipped.
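
A minimal sketch of that workaround, as I imagine it (hypothetical, not the project's actual code; the real getModifiedTime() signature and the way the tap would learn its sink role need to be checked against CassandraTap.java):

import java.io.IOException;

import org.apache.hadoop.mapred.JobConf;

// Hedged sketch of the suggested workaround: report epoch 0 as the sink's
// modification time so that Cascading's areSourcesNewer() check sees every
// source as newer than the sink and never skips the final flow step.
// The usedAsSink flag is hypothetical; the real tap would have to detect
// its role some other way.
public class SinkAwareModifiedTime {

  private final boolean usedAsSink;

  public SinkAwareModifiedTime(boolean usedAsSink) {
    this.usedAsSink = usedAsSink;
  }

  public long getModifiedTime(JobConf conf) throws IOException {
    if (usedAsSink) {
      return 0L;                        // the sink always looks "older" than its sources
    }
    return System.currentTimeMillis();  // sourcing keeps the current behaviour
  }
}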

DynamicRowSink.sink() method - Cannot pass List of column names.

class: com.ifesdjeen.cascading.cassandra.sinks.DynamicRowSink
method: sink(…) <— the branch at line 46 of that method can never occur.

Code snippet:

Map<String, String> dataTypes = SettingsHelper.getDynamicTypes(settings);
Map<String, String> dynamicMappings = SettingsHelper.getDynamicMappings(settings);

AbstractType columnNameType = SerializerHelper.inferType(dataTypes.get("columnName"));

Object columnNameFieldSpec = dynamicMappings.get("columnName");
List<String> columnNameFields = new ArrayList<String>();
if (columnNameFieldSpec instanceof String) {
  columnNameFields.add((String) columnNameFieldSpec);
} else {
  // <— Is there a way to have a List<String> for columnNameFieldSpec?
  // See the dynamicMappings declaration at line 37 and the columnNameFieldSpec declaration at line 41.
  columnNameFields.addAll((List<String>) columnNameFieldSpec);
}
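
One way to make that branch reachable would be for the dynamic mappings to be typed as Map<String, Object> rather than Map<String, String>, so a caller could supply either a single field name or a list of field names under "columnName". A hedged sketch of that idea (the field names "year" and "player" are purely illustrative, and this is not the project's actual SettingsHelper behaviour):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DynamicMappingSketch {
  public static void main(String[] args) {
    // Hypothetical: mappings typed as Map<String, Object> instead of Map<String, String>.
    Map<String, Object> dynamicMappings = new HashMap<String, Object>();
    dynamicMappings.put("columnName", Arrays.asList("year", "player")); // illustrative field names

    Object columnNameFieldSpec = dynamicMappings.get("columnName");
    List<String> columnNameFields = new ArrayList<String>();
    if (columnNameFieldSpec instanceof String) {
      columnNameFields.add((String) columnNameFieldSpec);
    } else {
      // With an Object-valued map this branch becomes reachable.
      columnNameFields.addAll((List<String>) columnNameFieldSpec);
    }
    System.out.println(columnNameFields); // [year, player]
  }
}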

Test failing with cassandra 2.0.2 +

Cascalog queries are failing with the cassandra-all 2.0.2 and 2.0.3 dependencies.

Also, is there a particular reason why the project is compiled against the 1.2.11 release of Cassandra? With newer versions I am not able to build the project itself.

timeouts on long-running jobs

For map-reduce-style long-running jobs, I am hitting the Thrift request timeout when using the Cassandra tap. For now the only solution I have found is to increase the timeout on the Cassandra side, but that is obviously not a real fix. Any ideas on what to do for long-running jobs?

StackOverflow error when processing wide rows

When processing some wide rows I get a StackOverflowError.

I'm running the following code:

(defmapcatop verticalize-wide-rows [rowkey super-column]
  (infof "Verticalizing rowkey: %s of type: %s of size %s." rowkey (type super-column) (.size super-column))
  (vec
   (map (fn [c]
          (infof "At rowkey: %s processing column: %s of type %s." rowkey (ByteBufferUtil/string (.name c)) (type c))
          [rowkey
           (ByteBufferUtil/string (.name c))
           (ByteBufferUtil/string (.value c))])
        (.values super-column))))

(defn measurements [tap]
  (<- [?rowkey ?columnkey ?value]
      (tap ?rowkey-in ?supercolumn)
      (verticalize-wide-rows ?rowkey-in ?supercolumn :> ?rowkey ?columnkey ?value)
      (:trap (hfs-textline "/mnt3/m7/exceptions"))))


(let [tap (cassandra-tap "localhost" "m7" "Measurements" [] {})]
  (?- (hfs-textline "/mnt3/m7/data/" :sinkmode :replace)
      (measurements tap)))

I've tried it with both OpenJDK and Oracle JDK 1.6 as below. Many of the rows do process correctly.

With OpenJDK 1.6
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.5) (6b24-1.11.5-0ubuntu1~12.04.1)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

13/01/14 09:02:05 ERROR stream.SourceStage: caught throwable
java.lang.StackOverflowError
        at java.net.SocketInputStream.read(SocketInputStream.java:146)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
        at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:692)
        at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:676)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:306)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:328)
<lots of repeats at 328>
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:328)
cascading.flow.FlowException: local step failed
        at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:191)

<until>
cascading.flow.FlowException: internal error during mapper execution
        at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:135)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: java.lang.StackOverflowError
        at java.net.SocketInputStream.read(SocketInputStream.java:146)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
        at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:692)
        at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:676)

With Oracle JDK 1.6
java version "1.6.0_38"
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)

13/01/14 10:22:35 INFO r4f.core: At rowkey: 3e09a446-cbd6-411e-a8ae-6304f1e66758:relativeHumidity processing column: 2012-03-08T01:30:00+0000 => 2012-03-08T01:59:59+0000 of type class org.apache.cassandra.db.Column.
13/01/14 10:22:38 INFO mapred.LocalJobRunner: 
13/01/14 10:22:41 INFO mapred.LocalJobRunner: 
13/01/14 10:36:39 ERROR stream.SourceStage: caught throwable
java.lang.StackOverflowError
        at sun.nio.cs.UTF_8.updatePositions(UTF_8.java:58)
        at sun.nio.cs.UTF_8$Encoder.encodeArrayLoop(UTF_8.java:392)
        at sun.nio.cs.UTF_8$Encoder.encodeLoop(UTF_8.java:447)
        at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:544)
        at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:240)
        at java.lang.StringCoding.encode(StringCoding.java:272)
        at java.lang.String.getBytes(String.java:946)
        at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:185)
        at org.apache.cassandra.thrift.ColumnParent.write(ColumnParent.java:400)
        at org.apache.cassandra.thrift.Cassandra$get_range_slices_args.write(Cassandra.java:11840)
        at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
        at org.apache.cassandra.thrift.Cassandra$Client.send_get_range_slices(Cassandra.java:690)
        at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:679)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:306)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:328)

<ColumnFamilyRecordReader.java:328 gets repeated a lot and then we get the following>

        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:328)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:342)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:283)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:174)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader.next(ColumnFamilyRecordReader.java:453)
        at com.ifesdjeen.cascading.cassandra.hadoop.ColumnFamilyRecordReader.next(ColumnFamilyRecordReader.java:54)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
        at cascading.tap.hadoop.util.MeasuredRecordReader.next(MeasuredRecordReader.java:61)
        at com.ifesdjeen.cascading.cassandra.CassandraScheme.source(CassandraScheme.java:126)
        at cascading.tuple.TupleEntrySchemeIterator.getNext(TupleEntrySchemeIterator.java:140)
        at cascading.tuple.TupleEntrySchemeIterator.hasNext(TupleEntrySchemeIterator.java:120)
        at cascading.flow.stream.SourceStage.map(SourceStage.java:76)
        at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
        at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:124)
        ... 3 more
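
For what it is worth, the repeated maybeInit frame at ColumnFamilyRecordReader.java:328 suggests the row iterator re-invokes itself whenever a fetched batch yields no usable rows, so a long run of empty or tombstoned ranges grows the stack until it overflows. Below is a hedged, self-contained sketch of the usual fix, flattening that recursion into a loop; fetchNextBatch() is a hypothetical stand-in for the get_range_slices round trip, and this is not the project's actual code:

import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch only: shows the recursion-to-loop rewrite that typically cures this
// kind of StackOverflowError in batch-fetching iterators.
abstract class RowBatchIterator<T> implements Iterator<T> {

  private Iterator<T> current;

  // Hypothetical stand-in for the get_range_slices call; returns an empty list
  // when the fetched range has no live rows, and null when the scan is done.
  protected abstract List<T> fetchNextBatch();

  public boolean hasNext() {
    // Loop instead of recursing: keep fetching until a non-empty batch turns up
    // or the scan is exhausted, without growing the call stack.
    while (current == null || !current.hasNext()) {
      List<T> batch = fetchNextBatch();
      if (batch == null) {
        return false;
      }
      current = batch.iterator();
    }
    return true;
  }

  public T next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return current.next();
  }

  public void remove() {
    throw new UnsupportedOperationException();
  }
}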

What column types are supported by cascading-cassandra?

What column types are supported by cascading-cassandra (c*^2)? For instance, when defining a CassandraScheme, the examples in the docs list the Int32Type, UTF8Type, DoubleType, DateType and DecimalType types, but how do you create a list, set or map type? Are these even supported? If so, what are the proper schema 'strings' used to define them (e.g. "List")?
Also, is BytesType supported?

Delete Tap

Hi,
Have you considered constructing a delete tap, to delete a data set from Cassandra? It would be really handy.

Thanks,
Karthik

Column family created using cassandra-cli: 'key' data not read using the static source

I have a column family with one key and three columns (key, id, lower, upper), created using cassandra-cli. I have inserted data using CQL3, and I am able to read the data through both CQL3 and the CLI.

When I try to read the data using the cascading-cassandra static source, my key value is not fetched; the tuple log is shown below…

['�', 'a', 'A', 'e145fa10-87ca-11e3-a932-9bf3febcf45d']
['�', 'b', 'B', '000b3720-87cc-11e3-a932-9bf3febcf45d']
tuples count: 2

I do not know whether I am missing something, or whether I should not mix CLI and CQL3 tables and data.

Please guide me on this.

Trying to Configure SourceTap

Hi,

I am trying to configure a SourceTap from Cassandra to source data from a column family.

I found that the docs are not in sync with the code, and I am unclear on the expected configuration for the settings map (Map<String, Object>). I keep running into:

java.lang.RuntimeException: no config type specs for key: types

I have tried setting the properties that the code seems to look for :
types.dynamic, source.columns (separated by ':'), etc., but without much luck.

Any sample code here would help.

Thanks guys,
Karthik
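
Not an authoritative answer, but judging from the "no config type specs for key: types" error, the scheme seems to look up a settings entry named "types" that maps column names to Cassandra validator class names. A purely illustrative sketch of such a settings map follows; the exact keys, and how the map is handed to the scheme, should be checked against SettingsHelper and the project's tests:

import java.util.HashMap;
import java.util.Map;

public class SourceSettingsSketch {
  public static void main(String[] args) {
    // Hypothetical per-column type specs; key and column names are assumptions.
    Map<String, String> types = new HashMap<String, String>();
    types.put("id", "UTF8Type");
    types.put("lower", "UTF8Type");
    types.put("upper", "UTF8Type");

    Map<String, Object> settings = new HashMap<String, Object>();
    settings.put("types", types);                       // the key the error complains about
    settings.put("source.columns", "id:lower:upper");   // key mentioned in this report

    System.out.println(settings);
  }
}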

sinking composite partition keys is not handled correctly

e.g. if a table is created with

CREATE TABLE libraries (
  name text,
  language text,
  votes int,
  primary key ((name,language)));

all the results seem to have partition key '':''

I'll get a test case together, and hopefully a fix ASAP.
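
For reference, a hedged sketch of how a two-part composite partition key like ((name, language)) is serialized with Cassandra's own marshalling classes; presumably the sink needs to build something like this instead of concatenating empty components (the values are illustrative):

import java.nio.ByteBuffer;
import java.util.Arrays;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.CompositeType;
import org.apache.cassandra.db.marshal.UTF8Type;

public class CompositeKeySketch {
  public static void main(String[] args) throws Exception {
    // Two text components, matching primary key ((name, language)).
    CompositeType keyType = CompositeType.getInstance(
        Arrays.<AbstractType<?>>asList(UTF8Type.instance, UTF8Type.instance));

    // Build the composite key one component at a time.
    CompositeType.Builder builder = new CompositeType.Builder(keyType);
    builder.add(UTF8Type.instance.decompose("cascading-cassandra"));
    builder.add(UTF8Type.instance.decompose("java"));
    ByteBuffer key = builder.build();

    System.out.println(keyType.getString(key)); // human-readable form of the composite key
  }
}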

Does cascading-cassandra support Cassandra 1.2.x?

We're using Cassandra 1.2.x, Cassaforte 1.2.0, cascading-cassandra 1.0.0-rc5, Clojure 1.5.1 and Cascalog 2.0.1-SNAPSHOT, all running locally, and seeing the following error when trying to create a simple sink tap to C*:

2014-03-20 18:54:24,969 ERROR checkpointed-workflow:? - Component failed
cascading.flow.planner.PlannerException: could not build flow from assembly: [keyspace may not be null]
        at cascading.flow.planner.FlowPlanner.handleExceptionDuringPlanning(FlowPlanner.java:576)
        at cascading.flow.hadoop.planner.HadoopPlanner.buildFlow(HadoopPlanner.java:265)
        at cascading.flow.hadoop.planner.HadoopPlanner.buildFlow(HadoopPlanner.java:80)
        at cascading.flow.FlowConnector.connect(FlowConnector.java:459)
        at cascalog.cascading.flow$compile_hadoop.invoke(flow.clj:34)
        at cascalog.cascading.flow$fn__3262.invoke(flow.clj:63)
        at cascalog.cascading.flow$fn__3249$G__3244__3254.invoke(flow.clj:50)
        at cascalog.cascading.flow$fn__3260.invoke(flow.clj:67)
        at cascalog.cascading.flow$fn__3249$G__3244__3254.invoke(flow.clj:50)
        at cascalog.api$_QMARK__.doInvoke(api.clj:153)
        at clojure.lang.RestFn.invoke(RestFn.java:421)
        at pickles.jobs.adevent_features$_main$fn__90.invoke(adevent_features.clj:179)
        at cascalog.checkpoint$mk_runner$fn__6285.invoke(checkpoint.clj:60)
        at clojure.lang.AFn.run(AFn.java:24)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.UnsupportedOperationException: keyspace may not be null
        at org.apache.cassandra.hadoop.ConfigHelper.setOutputKeyspace(ConfigHelper.java:126)
        at org.apache.cassandra.hadoop.ConfigHelper.setOutputColumnFamily(ConfigHelper.java:152)
        at com.ifesdjeen.cascading.cassandra.BaseCassandraScheme.sinkConfInit(BaseCassandraScheme.java:124)
        at com.ifesdjeen.cascading.cassandra.CassandraScheme.sinkConfInit(CassandraScheme.java:160)
        at com.ifesdjeen.cascading.cassandra.CassandraScheme.sinkConfInit(CassandraScheme.java:32)
        at cascading.tap.Tap.sinkConfInit(Tap.java:204)
        at cascading.flow.hadoop.HadoopFlowStep.initFromSink(HadoopFlowStep.java:422)
        at cascading.flow.hadoop.HadoopFlowStep.getInitializedConfig(HadoopFlowStep.java:101)
        at cascading.flow.hadoop.HadoopFlowStep.createFlowStepJob(HadoopFlowStep.java:201)
        at cascading.flow.hadoop.HadoopFlowStep.createFlowStepJob(HadoopFlowStep.java:69)
        at cascading.flow.planner.BaseFlowStep.getFlowStepJob(BaseFlowStep.java:768)
        at cascading.flow.BaseFlow.initializeNewJobsMap(BaseFlow.java:1229)
        at cascading.flow.BaseFlow.initialize(BaseFlow.java:199)
        at cascading.flow.hadoop.planner.HadoopPlanner.buildFlow(HadoopPlanner.java:259)
        ... 13 more

Any thoughts or suggestions would be most welcome!

Cassaforte calls non-existent method set_cql_version

After creating the keyspace CascadingCassandra I run the midje tests for cascading-cassandra, which use Cassaforte version "1.0.0-20120920.162529-8", and they attempt to call a method of org.apache.cassandra.thrift.Cassandra.Client that does not exist.

"Exception in thread "main" java.lang.RuntimeException: org.apache.thrift.TApplicationException: Invalid method name: 'set_cql_version'"

The call is made at https://github.com/clojurewerkz/cassaforte/blob/master/src/java/clojurewerkz/cassaforte/CassandraClient.java#L70

This could be a conflict of versions. Note that running with lein-pedantic shows a number of conflicts, potentially causing unexpected versions of libraries to be used.

Example for dynamic/wide row with CompositeType

Can you please provide an example of the parameters to pass for a dynamic column family with a CompositeType key?

sample column family:
CREATE COLUMN FAMILY runs_by_year_player_dynamic
WITH COMPARATOR='CompositeType(Int32Type, UTF8Type)'
and key_validation_class='Int32Type'
AND default_validation_class='UTF8Type';

Note:
I could not find a way to label this as a question, so I created it as an issue.

Thanks.

Specifying a query as the tap

From what I understand, the tap reads an entire column family. Is there a way to specify a query so that the entire column family does not need to be loaded?
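
As far as I know there is no query support at this layer, but the underlying Thrift input format can at least be narrowed with a slice predicate so that only a limited set of columns is read per row. A hedged sketch using Cassandra's own ConfigHelper directly; whether cascading-cassandra exposes this through its settings map is a separate question:

import java.nio.ByteBuffer;

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.hadoop.conf.Configuration;

public class SlicePredicateSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Read at most 100 columns per row, over the full column-name range.
    SliceRange range = new SliceRange(
        ByteBuffer.wrap(new byte[0]),   // start: empty = from the beginning
        ByteBuffer.wrap(new byte[0]),   // finish: empty = to the end
        false,                          // not reversed
        100);                           // column count limit
    SlicePredicate predicate = new SlicePredicate().setSlice_range(range);

    ConfigHelper.setInputSlicePredicate(conf, predicate);
  }
}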

Handling null values

Hello,
I am not sure what the best solution for this is, but when there are null values in a row in Cassandra, the entire operation fails with something similar to this:

Caused by: java.lang.NullPointerException
at org.apache.cassandra.utils.ByteBufferUtil.toInt(ByteBufferUtil.java:416)
at org.apache.cassandra.cql.jdbc.JdbcInt32.compose(JdbcInt32.java:94)
at org.apache.cassandra.db.marshal.Int32Type.compose(Int32Type.java:34)
at org.apache.cassandra.db.marshal.Int32Type.compose(Int32Type.java:26)
at com.ifesdjeen.cascading.cassandra.hadoop.SerializerHelper.deserialize(SerializerHelper.java:41)
at com.ifesdjeen.cascading.cassandra.sources.CqlSource.source(CqlSource.java:45)

Should these be handled with traps on the application side, or should a proper value be substituted for nulls based on the mapping?
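
One possible shape for the second option, as a hedged sketch: guard the deserialization so that an absent or empty ByteBuffer maps to null instead of being handed to compose(), which is what throws here. This is illustrative only, not the project's actual SerializerHelper:

import java.nio.ByteBuffer;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.Int32Type;

public class NullSafeDeserialize {

  // Return null instead of letting compose() throw on missing or empty values.
  static Object deserializeOrNull(AbstractType<?> type, ByteBuffer bytes) {
    if (bytes == null || !bytes.hasRemaining()) {
      return null;
    }
    return type.compose(bytes);
  }

  public static void main(String[] args) {
    System.out.println(deserializeOrNull(Int32Type.instance, null));                             // null
    System.out.println(deserializeOrNull(Int32Type.instance, Int32Type.instance.decompose(42))); // 42
  }
}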
