zshamrock / dynocsv
Exports DynamoDB table into CSV
Home Page: https://snapcraft.io/dynocsv
License: MIT License
Use a meta linter and a GitHub hook.
Mainly the Limit parameters, to stay in sync with the other naming conventions.
Currently it handles only bool, string, and number. It would make sense to handle the other available data types too, although complex data structures would likely need to be wrapped in quotes ("") to escape the commas separating their elements.
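A minimal Go sketch of that quoting concern, using the standard encoding/csv package (the cellForList/writeRow helpers are illustrative, not dynocsv's API): encoding/csv quotes a cell automatically once it contains the separator.

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"strings"
)

// cellForList flattens a list attribute into a single CSV cell. The element
// values here are plain strings for simplicity; real DynamoDB lists can nest.
func cellForList(elems []string) string {
	return strings.Join(elems, ",")
}

// writeRow renders one CSV record; encoding/csv quotes any cell that
// contains a comma, so the flattened list survives the round trip.
func writeRow(cells []string) string {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	w.Write(cells)
	w.Flush()
	return buf.String()
}

func main() {
	fmt.Print(writeRow([]string{"id-1", cellForList([]string{"a", "b", "c"})}))
	// prints: id-1,"a,b,c"
}
```

Because the csv writer handles the escaping, only the flattening of nested structures would need custom code.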
Also, it will not consume up to the maximum of the configured RCU, but only up to 80% of it.
This would require fetching the table's description and all of its indexes.
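The 80% cap could be applied as a simple budget computed from the fetched table description; this is a sketch under the stated assumptions (including the treatment of RCU == 0 as on-demand), not the shipped code.

```go
package main

import (
	"fmt"
	"math"
)

// readBudget returns the per-second read-capacity budget a scan should
// respect: 80% of the provisioned RCU. The on-demand fallback (RCU == 0
// means no provisioned limit) is an assumption of this sketch.
func readBudget(provisionedRCU float64) float64 {
	if provisionedRCU == 0 {
		return math.Inf(1) // on-demand: nothing provisioned to protect
	}
	return provisionedRCU * 0.8
}

func main() {
	fmt.Println(readBudget(100)) // prints: 80
}
```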
It allows filtering the data by the corresponding hash key value. It will look like --hash <value>. A more flexible option would be to implement full custom filtering support as in #11 (i.e. --filter "Name = Value"), but the hash-specific alternative is simpler and faster to implement (as I currently don't have much free time to work on the generic approach).
When columns are not specified explicitly, the order of the columns differs between runs. To minimize that, it might be better to sort the columns alphabetically.
This doesn't guarantee the columns will be in the same order every run, but overall the order will at least be similar.
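Deterministic ordering can be sketched as collecting the attribute names seen during the scan into a set and sorting them; the helper names are illustrative.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// orderedColumns turns the set of attribute names discovered during the
// scan into a deterministic, alphabetically sorted slice.
func orderedColumns(seen map[string]struct{}) []string {
	cols := make([]string, 0, len(seen))
	for c := range seen {
		cols = append(cols, c)
	}
	sort.Strings(cols)
	return cols
}

// headerLine renders the sorted columns as the CSV header row.
func headerLine(seen map[string]struct{}) string {
	return strings.Join(orderedColumns(seen), ",")
}

func main() {
	seen := map[string]struct{}{"name": {}, "id": {}, "age": {}}
	fmt.Println(headerLine(seen)) // prints: age,id,name
}
```

With the same column set, the order is then identical across runs; it only varies when the items scanned expose different attribute sets.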
This further means it will simply read and discard the data, i.e. nothing is saved to disk; it measures purely the time of getting the data from DynamoDB.
This is mainly to evaluate the performance of a particular query.
Should this option only be enabled when the query options are set?
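One way to sketch this benchmark mode in Go is to point the CSV writer at io.Discard, so disk I/O drops out of the measurement (drainDiscard is a hypothetical stand-in for the real Scan/Query loop):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"time"
)

// drainDiscard writes every record to io.Discard and reports the elapsed
// time: nothing is saved to disk, so the measurement is dominated by the
// fetch itself. The paged [][]string input stands in for Scan/Query pages.
func drainDiscard(pages [][][]string) time.Duration {
	start := time.Now()
	w := csv.NewWriter(io.Discard)
	for _, page := range pages {
		for _, rec := range page {
			w.Write(rec)
		}
	}
	w.Flush()
	return time.Since(start)
}

func main() {
	pages := [][][]string{{{"a", "1"}}, {{"b", "2"}}}
	fmt.Println("drained in", drainDiscard(pages))
}
```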
It is the opposite of the columns flag, which makes it possible to select only the particular columns instead.
The syntax is yet TBD. The way I see it working: query the data, then query each of the foreign tables, and join the results programmatically.
Only for queries: --join=<table>.<attribute name> on <table>.<attribute name>. For the on clause, technically we can omit the
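The programmatic join described above could look roughly like this; all function and attribute names are illustrative, and the real --join semantics are still undecided.

```go
package main

import "fmt"

// joinRows sketches the programmatic join: index the foreign table's rows
// by the join attribute, then copy each matching foreign row's attributes,
// prefixed with the foreign table name, onto the base row.
func joinRows(base, foreign []map[string]string, baseAttr, foreignAttr, foreignTable string) []map[string]string {
	idx := make(map[string]map[string]string, len(foreign))
	for _, f := range foreign {
		idx[f[foreignAttr]] = f
	}
	out := make([]map[string]string, 0, len(base))
	for _, b := range base {
		row := make(map[string]string, len(b))
		for k, v := range b {
			row[k] = v
		}
		if f, ok := idx[b[baseAttr]]; ok {
			for k, v := range f {
				row[foreignTable+"."+k] = v
			}
		}
		out = append(out, row)
	}
	return out
}

func main() {
	orders := []map[string]string{{"OrderId": "o1", "UserId": "u1"}}
	users := []map[string]string{{"UserId": "u1", "Name": "Alice"}}
	fmt.Println(joinRows(orders, users, "UserId", "UserId", "Users")[0]["Users.Name"])
	// prints: Alice
}
```

Indexing the foreign table first keeps the join at one pass per table instead of a nested scan.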
As per #12, the internal buffer size before flushing is currently 1000 records, but if the total item count or table size in bytes is in an acceptable range, consider increasing that buffer size limit accordingly.
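A possible heuristic for picking that buffer size, with an assumed 64 MiB cut-off (the threshold is this sketch's assumption, not a decision from #12):

```go
package main

import "fmt"

// flushThreshold picks the in-memory record buffer size before flushing to
// disk. 1000 is the current default from #12; buffering the whole table
// when it fits under an assumed 64 MiB cap is this sketch's heuristic.
func flushThreshold(itemCount, tableBytes int64) int {
	const def = 1000
	const maxBytes = 64 << 20 // 64 MiB, an assumed "acceptable range"
	if itemCount > 0 && tableBytes > 0 && tableBytes <= maxBytes {
		return int(itemCount) // small table: a single flush at the end
	}
	return def
}

func main() {
	fmt.Println(flushThreshold(5000, 1<<20)) // prints: 5000
	fmt.Println(flushThreshold(5000, 1<<30)) // prints: 1000
}
```

Both itemCount and tableBytes are available from the table description, so no extra requests are needed.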
Like: dynocsv --filter <AttributeName> <Operator> <Values>
ex.: dynocsv --filter Sessions Contains S1
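Parsing that --filter shape could be as simple as splitting on whitespace; quoting of values containing spaces is left undecided here, as in the proposal.

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilter splits a --filter argument shaped like
// "<AttributeName> <Operator> <Values...>", e.g. "Sessions Contains S1".
func parseFilter(arg string) (attr, op string, values []string, err error) {
	parts := strings.Fields(arg)
	if len(parts) < 3 {
		return "", "", nil, fmt.Errorf("want <AttributeName> <Operator> <Values>, got %q", arg)
	}
	return parts[0], parts[1], parts[2:], nil
}

func main() {
	attr, op, values, _ := parseFilter("Sessions Contains S1")
	fmt.Println(attr, op, values) // prints: Sessions Contains [S1]
}
```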
--sort-gt 1529665592540 returned all the values back.
Also, have to check the behavior on the table query.
i.e. --group-by <attribute name> and (just a draft right now) --aggr=<function>(<attribute name>), ex.: --aggr=sum(Passes).
Obviously there could be multiple aggregated attributes.
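A sketch of how the draft --group-by/--aggr pair could behave on already-exported rows (option semantics are still draft; the helper is illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// groupSum sketches --group-by <attr> combined with --aggr=sum(<attr>):
// rows are grouped by groupAttr's value and sumAttr is summed per group.
// Rows are plain string maps, as they would come out of the CSV pipeline.
func groupSum(rows []map[string]string, groupAttr, sumAttr string) map[string]float64 {
	out := map[string]float64{}
	for _, r := range rows {
		n, err := strconv.ParseFloat(r[sumAttr], 64)
		if err != nil {
			continue // this sketch simply skips non-numeric values
		}
		out[r[groupAttr]] += n
	}
	return out
}

func main() {
	rows := []map[string]string{
		{"Team": "a", "Passes": "1"},
		{"Team": "a", "Passes": "2"},
		{"Team": "b", "Passes": "5"},
	}
	fmt.Println(groupSum(rows, "Team", "Passes")["a"]) // prints: 3
}
```

Supporting multiple aggregated attributes would mean running one such accumulator per --aggr expression.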
By default, it assumes hash and sort correspond to the table's hash and sort attributes, but if an index should be used, introduce an --index parameter with the index name as the value.
By default, also print how many pages have been processed so far, and allow suppressing this log by adding a new quiet/q option.
Have to create the final CSV file with the columns and then copy the content from the actual data file.
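That two-pass assembly (header first, once the full column set is known, then a straight copy of the data file) can be sketched as:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"strings"
)

// assembleCSV writes the header (known only once the scan finishes) and
// then copies the already-written data file's content after it.
func assembleCSV(columns []string, data io.Reader, final io.Writer) error {
	w := csv.NewWriter(final)
	if err := w.Write(columns); err != nil {
		return err
	}
	w.Flush()
	if err := w.Error(); err != nil {
		return err
	}
	_, err := io.Copy(final, data)
	return err
}

// assembleString is a convenience wrapper; strings.Reader stands in for
// the temporary data file.
func assembleString(columns []string, data string) string {
	var out bytes.Buffer
	assembleCSV(columns, strings.NewReader(data), &out)
	return out.String()
}

func main() {
	fmt.Print(assembleString([]string{"id", "name"}, "1,a\n2,b\n"))
}
```

io.Copy streams the data file, so the table never has to fit in memory for this final step.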
package github.com/cpuguy83/go-md2man/v2/md2man: cannot find package "github.com/cpuguy83/go-md2man/v2/md2man" in any of:
/usr/lib/go-1.6/src/github.com/cpuguy83/go-md2man/v2/md2man (from $GOROOT)
/root/go/src/github.com/cpuguy83/go-md2man/v2/md2man (from $GOPATH)
Describe how to provide/set the AWS profile used and options.
Currently it prints the stack trace.
Additionally allows limiting the data further by the sort key value when using #15. The same objectives apply as in the case of #15.
The difference here is that while for the hash key only the = operator is supported, for the sort key multiple operators are supported, like gt, ge, lt, le, begins with (for strings), and between.
So there will be separate sort CLI options, one per operator (e.g. --sort-gt).
And only one sort-type argument will be allowed.
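Mapping those per-operator options to DynamoDB KeyConditionExpression fragments could look like this (the option names and the #s/:s placeholders are assumptions of the sketch):

```go
package main

import "fmt"

// sortCondition maps a per-operator sort option to the DynamoDB
// KeyConditionExpression fragment it would produce, with "#s" and ":s"
// standing for the sort attribute name and value placeholders.
func sortCondition(op string) (string, error) {
	switch op {
	case "eq":
		return "#s = :s", nil
	case "gt":
		return "#s > :s", nil
	case "ge":
		return "#s >= :s", nil
	case "lt":
		return "#s < :s", nil
	case "le":
		return "#s <= :s", nil
	case "begins-with":
		return "begins_with(#s, :s)", nil // strings only
	case "between":
		return "#s BETWEEN :s1 AND :s2", nil // needs two values
	}
	return "", fmt.Errorf("unsupported sort operator %q", op)
}

func main() {
	cond, _ := sortCondition("gt")
	fmt.Println(cond) // prints: #s > :s
}
```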
To allow limiting the data up to the specified argument value.
It is an extension of #24: if provided, that value will be used instead of the value derived from the RCU (also covering the On-Demand case, where the RCU is 0).
--norate would mean running without taking RCU consumption or rate limiting into account, assuming the user knows what they are doing.
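Combining --norate, the explicit limit, and the RCU-derived default could follow a simple precedence; the order chosen here is an assumption of the sketch.

```go
package main

import (
	"fmt"
	"math"
)

// effectiveBudget combines the proposed knobs: --norate disables rate
// limiting entirely, an explicit limit (the #24 extension) overrides the
// RCU-derived value, and otherwise 80% of the provisioned RCU applies
// (RCU == 0 means the table is On-Demand).
func effectiveBudget(norate bool, explicit, provisionedRCU float64) float64 {
	switch {
	case norate:
		return math.Inf(1)
	case explicit > 0:
		return explicit
	case provisionedRCU == 0:
		return math.Inf(1) // On-Demand: no provisioned capacity to respect
	default:
		return provisionedRCU * 0.8
	}
}

func main() {
	fmt.Println(effectiveBudget(false, 0, 100))  // prints: 80
	fmt.Println(effectiveBudget(false, 50, 100)) // prints: 50
}
```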
When distributed as a snap (#1), the user explicitly has to connect the aws-config-credentials interface, i.e. snap connect dynocsv:aws-config-credentials; if not, using other env variables could be the alternative option.
And publish the built image to Docker Hub.