Comments (4)
Sorry for the late response. I was finally able to figure out the issue, and I'm posting it here so it can be useful to others. DSBulk was able to properly unload and load the blob data; I confirmed that by testing my application against both clusters. The issue I reported was actually a Python bug: earlier I was using a cqlsh installed from python-pip, but once I switched to the correct version of cqlsh distributed with the Cassandra tarball, the output looked fine.
from dsbulk.
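The "'ascii' codec" error mentioned above is characteristic of Python attempting to decode raw bytes as ASCII text. A minimal sketch of that failure mode (assuming the reported error was a `UnicodeDecodeError`; the blob value here is made up for illustration):

```python
# Attempting to decode arbitrary binary data as ASCII fails, which is
# the kind of error a pip-installed cqlsh may surface when displaying
# blob values (an assumption about the error seen by the reporter).
blob = b"\x89PNG\r\n"  # arbitrary binary data containing a non-ASCII byte

try:
    blob.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc)  # "'ascii' codec can't decode byte 0x89 in position 0: ..."
```

A correctly packaged cqlsh formats blob values for display instead of decoding them as text, which is why switching distributions made the symptom disappear.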
Hi, thanks for reporting this.
Please note that DSBulk is not compatible with CQLSH COPY. Each tool formats and parses CSV data differently, and you should never expect the output produced by one tool to work as input for the other.
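One concrete source of incompatibility for blob columns (stated here as an assumption about default settings, not something confirmed in this thread): cqlsh COPY writes blob values as 0x-prefixed hex strings, while DSBulk by default writes them as base64. A minimal Python sketch of the two representations:

```python
import base64

blob = bytes([0xCA, 0xFE, 0xBA, 0xBE])

# cqlsh COPY represents blobs as 0x-prefixed hex strings:
cqlsh_form = "0x" + blob.hex()  # "0xcafebabe"

# DSBulk (by default, as assumed here) represents blobs as base64:
dsbulk_form = base64.b64encode(blob).decode("ascii")  # "yv66vg=="

# The two representations are not interchangeable, so a file unloaded
# by one tool cannot be fed directly into the other:
assert cqlsh_form != dsbulk_form
```

This is only one of several formatting differences (quoting, NULL handling, and collection syntax differ as well), hence the general advice never to mix the two tools' files.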
Would you mind giving me a full example to reproduce the problem? The snippet you gave is not clear enough, as I can't tell whether it is meant to be a CSV file, a table's contents, or something else.
You might have a schema issue as well: your snippet mentions the "'ascii' codec", but binary data cannot be stored in ascii columns; you should use blob columns instead.
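To illustrate the distinction: Cassandra's ascii type only holds 7-bit text, while blob holds arbitrary bytes. A small sketch using a hypothetical `fits_ascii` helper (not part of any driver API):

```python
def fits_ascii(data: bytes) -> bool:
    """True if every byte is in the 7-bit ASCII range, i.e. the value
    could be stored in a Cassandra ascii column."""
    return all(b < 128 for b in data)

payload = bytes(range(256))  # arbitrary binary payload

assert not fits_ascii(payload)    # binary data: needs a blob column
assert fits_ascii(b"plain text")  # pure ASCII text would fit
```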
Good luck!
@jaibheem were you able to fix your issue?
Closing as we didn't hear back from the reporter.