Comments (6)
I don't think it's unreasonable to expect that even if you pass the entire large dataset as a string or Buffer, it would still be processed in chunks to reduce memory usage. A 300MB Buffer of CSV data could easily use several GB of RAM if you parse all the rows in advance. Holding 300MB in RAM might not be a big deal, but it's reasonable to expect the rows to be parsed in small batches so the additional memory usage stays minimal.
At least, this issue took me about 6 hours to figure out and work around yesterday, so I filed it here, as I can imagine others being unpleasantly surprised by it.
The workaround isn't super hard to implement once you know it's needed, but it's not at all obvious that it is.
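To make the failure mode concrete, here is a minimal sketch of the two ways of feeding csv-parse; the file name `big.csv` is a hypothetical stand-in:

```js
const fs = require('node:fs');
const { parse } = require('csv-parse');

const parser = parse();
parser.on('data', (record) => { /* handle one record */ });

// Problematic: one 300MB write means the parser expands the entire buffer
// into records in a single pass, before downstream backpressure can apply.
// parser.write(fs.readFileSync('big.csv')); parser.end();

// Streaming instead: the read stream yields small (~64KB) chunks, so only
// a handful of records are in flight at any moment.
fs.createReadStream('big.csv').pipe(parser);
```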
Ok, here it is:
https://www.npmjs.com/package/@sciactive/back-pressure-transform
Try it out and see if it works for you.
I messed around with this for a while trying to figure out a solution, but this seems to be a case that just isn't handled by the Transform API. After the last buffer is provided to the transform, _flush is called and the stream is closed, and you don't get any more chances to push the remaining data, even if there was backpressure. Kind of an odd gap in the API.
Anyway, my workaround for now is to split up the incoming buffers so that they are always small, and that seems to work OK. But with this issue lingering, I suppose others will be bitten by the same thing in the future.
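A minimal sketch of that workaround, assuming your pipeline lets you insert a Transform ahead of the parser; `rechunk` and the 64KB slice size are illustrative choices, not part of node-csv:

```js
const { Transform } = require('node:stream');

// Re-chunk incoming buffers into slices of at most `size` bytes so the
// downstream parser never receives one huge buffer. Buffer.subarray
// returns views into the original memory, so this adds almost no copying.
function rechunk(size = 64 * 1024) {
  return new Transform({
    transform(chunk, _encoding, callback) {
      for (let offset = 0; offset < chunk.length; offset += size) {
        this.push(chunk.subarray(offset, offset + size));
      }
      callback();
    },
  });
}
```

Each small slice then produces only a handful of records per call, which gives the stream machinery a chance to apply backpressure between chunks.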
Is it not the responsibility of the stream reader (input in your case) to provide smaller chunks? If someone doesn't control the chunk size of their input, maybe they could insert a custom transformer between the input and the parser. From the parser's standpoint, processing all the data it receives seems fair.
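For the case where you do control the input, here is a sketch of bounding chunk size at the source; the highWaterMark value and file name are illustrative:

```js
const fs = require('node:fs');
const { parse } = require('csv-parse');

// highWaterMark caps how many bytes fs hands the parser per chunk, so the
// parser never has to expand one huge buffer into records all at once.
fs.createReadStream('big.csv', { highWaterMark: 16 * 1024 })
  .pipe(parse())
  .on('data', (record) => { /* one record at a time */ });
```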
With a bit more research, I think resolving this would require implementing Duplex directly instead of using Transform. Kind of a pain, but the Node.js Transform API doesn't have any built-in concept of one-sided backpressure; it basically assumes the input and output are of similar size and passes all backpressure through upstream.
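A rough sketch of what that could look like; this is my own illustration, not node-csv code, and `parseRows` is a hypothetical stand-in for the chunk-to-records step:

```js
const { Duplex } = require('node:stream');

// A Duplex with one-sided backpressure: the writable side acknowledges a
// chunk only once every record parsed from it has been accepted by the
// readable side, so a slow consumer pauses the producer.
class BackPressureTransform extends Duplex {
  constructor(parseRows) {
    super({ readableObjectMode: true });
    this.parseRows = parseRows; // hypothetical chunk -> array of records
    this.queue = [];
    this.pendingWrite = null;
  }

  _write(chunk, _encoding, callback) {
    this.queue.push(...this.parseRows(chunk));
    if (this.drainQueue()) callback(); // all records fit; accept next chunk
    else this.pendingWrite = callback; // hold the write until _read drains us
  }

  _read() {
    if (this.drainQueue() && this.pendingWrite) {
      const callback = this.pendingWrite;
      this.pendingWrite = null;
      callback();
    }
  }

  // Push queued records until push() signals backpressure; true when empty.
  drainQueue() {
    while (this.queue.length > 0) {
      if (!this.push(this.queue.shift())) return false;
    }
    return true;
  }

  _final(callback) {
    // Only runs after the last _write callback, i.e. once the queue is empty.
    this.push(null);
    callback();
  }
}
```

The key difference from Transform is that the _write callback is withheld while records are still queued, which is exactly the one-sided backpressure that Transform can't express.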
In case you're still looking for a solution here, I wrote my own backpressure-aware transform stream. You can find it here:
And here's an example of it being used:
I'm thinking about making this a separate NPM package.
Related Issues
- csv-parse cross pollinates configuration between instances
- Delimiter Discovery
- YYYY-01-01 getting parsed as the previous year
- a solution to repair rows when using columns
- Link to Pipe recipe in docs is broken.
- CSV Parsing fail when extra cell with no column
- Function parse returns "any" - Could it be made generic?
- Error message when using vite
- Option `to` and `to_line` results in `ERR_STREAM_PREMATURE_CLOSE`
- When `bom` and `skipRecordsWithError` skip event is not raised for skipped records
- How can I convert json object with array to csv?
- Date parse bug
- Parse dot notation columns into nested objects
- CSV Parse breaks on comment characters that are also in rows
- csv-parse fails to parse very large CSV files
- User-defined value generation
- support headers as comment
- Support parsing of quote characters within quotes
- Ignore comment lines for the from_line value