Comments (9)
I would really like to see something like this: while the CSV and JSON importers are useful, they are not generic enough to import arbitrary data. I was playing around recently and wanted to do a bulk import of roughly 2 GB of data. It would be nice to be able to load and process such a file directly from within ArangoDB, but the File.read function in the fs module (is this module even officially there? I had to dig around quite a lot to find it) will always read the entire file.
It would be great to have a more versatile read function that can be given a length, a buffer, or a callback. That way the import tools could be rewritten in ArangoDB directly (I seem to recall the JSON and CSV importers import via the HTTP API?).
from arangodb.
I pre-process my raw data to CSV and then use the importer. Works fine. ;) What would the benefit be of moving this into ArangoDB?
Hi, I'd like to chime in, but I am not sure what you mean by "What would the benefit be from moving this to ArangoDB?" Can you elaborate on what you want to do, what you did, and what the last sentence means?
Hi Frank,
If I understood correctly, @a2800276 wants to be able to process his raw data and enter it into the DB from within arangosh.
I was wondering why the devs should invest time in this feature, since one can easily process raw data into CSV/JSON (via any language, say PHP, Python, or Bash) and use the already working importer.
Oh yes, I didn't notice the different users. Of course, I totally agree with @rotatingJazz on that.
@a2800276, is there some specific reason not to use the importer? Is there some edge case that you're trying to tackle?
I have had no issues importing external data so far, so I am interested in your edge case.
> I pre-process my raw data to CSV and then use the importer.
To me it seems very elaborate to preprocess data, which may or may not be in a form suitable for CSV/JSON, transform it into a different format, and then throw that against a --functionally restricted-- import script, which in turn uses HTTP to import individual records into the database.
When instead:
I could be reading and transforming arbitrarily formatted files from within the DB and have a much more efficient workflow, both from the "programmer efficiency" point of view and in terms of performance.
What I was trying to do concretely:
Re-implement a toy project, which I have working on neo4j, in Arango to play with the graph functionality. I'd like to import the Wikipedia inter-page links and play around with that dataset. The dump of that data is 4 GB, in the form of MySQL INSERT statements. If I can avoid it, I don't want to preprocess 4 GB of data into 3 GB of some other data that I can then import, when I could import it directly in roughly half the time.
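For illustration, a line-wise pass over such a dump could pull the inserted tuples out directly without materializing an intermediate file. This is only a sketch: the helper name and the regex are mine (real SQL values with embedded parentheses or commas in strings would need a proper parser), not anything ArangoDB provides:

```javascript
// Illustrative sketch: extract value tuples from one line of a MySQL
// dump, e.g. "INSERT INTO links VALUES (1,2),(3,4);", and hand each
// tuple to a callback. parseInsertLine is a hypothetical helper.
function parseInsertLine(line, callback) {
  // Match each parenthesized tuple, e.g. (1,2)
  var re = /\(([^)]*)\)/g;
  var match;
  while ((match = re.exec(line)) !== null) {
    // Naively split the tuple into raw fields; good enough for
    // simple numeric link data, not for arbitrary SQL literals.
    callback(match[1].split(","));
  }
}
```

The callback would then create the corresponding edge documents directly, with no intermediate CSV file.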
More generally:
Since Arango wants to become a general-purpose deployment platform with Foxx, it will certainly need some rudimentary file I/O implementation. As it's currently implemented, File.read is useless for anything apart from reading tiny toy files.
It might be interesting to have some reference data with which one could reproduce and confirm the problem with the current importer; maybe the problem can be shown with similar data from here, e.g.:
- http://www.imdb.com/interfaces (movies, actors, ...)
- http://www.datawrangling.com/some-datasets-available-on-the-web
We'll eventually have an implementation of Buffer, which will allow us to read binary files and process them in chunks from JavaScript.
Until that's available, I think there are two alternatives, at least for processing CSV and JSON files.
They work incrementally and process the input file line-wise (not strictly true for CSV, but think of them as working line-wise). They accept a callback function that is executed whenever a record has been read. You can then use the callback to process the data and put it into the database.
Example invocation for CSV files, with explicit options:

```js
var internal = require("internal");
var options = { separator: ",", quote: "\"" };
internal.processCsvFile("test.csv", function (data) { internal.print(data); }, options);
```

And with the default options:

```js
var internal = require("internal");
internal.processCsvFile("test.csv", function (data) { internal.print(data); });
```
Processing JSON files is similar:

```js
var internal = require("internal");
internal.processJsonFile("test.json", function (data) { internal.print(data); });
```
Note that the above functions aren't general-purpose file-processing functions, but are targeted at handling UTF-8 encoded CSV and JSON data.
For arbitrary file formats, we'll need an implementation of Buffer in JavaScript.
Closed because processCsvFile and processJsonFile are doing what I intended.