benyap / firestore-backfire
Ultimate control over importing and exporting your Firestore data
Home Page: https://www.npmjs.com/package/firestore-backfire
License: MIT License
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
(@commitlint/cli, @commitlint/config-conventional)
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
(@aws-sdk/client-s3, @aws-sdk/credential-provider-ini, @aws-sdk/types)
(@swc/cli, @swc/core)
Detected dependencies:
.github/workflows/publish.yml
actions/checkout v3
actions/setup-node v3
pnpm/action-setup v2.4.0
actions/cache v3
actions/github-script v6
.github/workflows/stale.yml
actions/stale v8
package.json
ansi-colors 4.1.3
commander 11.0.0
cosmiconfig 8.2.0
exceptional-errors 0.4.4
regenerator-runtime 0.14.0
@aws-sdk/client-s3 3.388.0
@aws-sdk/credential-provider-ini 3.388.0
@aws-sdk/types 3.387.0
@commitlint/cli 17.7.1
@commitlint/config-conventional 17.7.0
@faker-js/faker 8.0.2
@google-cloud/firestore 6.7.0
@google-cloud/storage 7.0.1
@release-it/bumper 5.1.0
@release-it/conventional-changelog 7.0.0
@swc/cli 0.1.62
@swc/core 1.3.76
@types/node 20.4.10
@vitest/coverage-c8 0.33.0
concurrently 8.2.0
firebase-tools 12.4.7
husky 8.0.3
prettier 3.0.1
release-it 16.1.5
resolve-tspaths 0.8.15
rimraf 5.0.1
typescript 5.1.6
vite 4.4.9
vitest 0.34.1
@aws-sdk/client-s3 >=3.0.0
@aws-sdk/credential-provider-ini >=3.0.0
@google-cloud/firestore >=6.0.0
@google-cloud/storage >=6.0.0
node >=12
I installed the package globally on Windows 11
npm install -g firestore-backfire @google-cloud/firestore
And I didn't notice that I have to use the full path, so I used a relative path instead
backfire export . -p database -k credentials.json
and
backfire export data -p database -k credentials.json
Where is the data located by default, so that I can delete it?
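When the export is given a relative path (like `.` above), the output should land relative to the directory the command was run from. A hedged sketch of locating the generated `.ndjson` files - the scratch directory below is purely illustrative, standing in for wherever the command was run:

```shell
# Illustrative only: a scratch directory stands in for the directory
# where `backfire export` was run; locate any .ndjson output under it.
mkdir -p /tmp/backfire-demo
touch /tmp/backfire-demo/export.ndjson
find /tmp/backfire-demo -name "*.ndjson"
```

Deleting the matched files removes the exported data.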
I tried importing data back into a test Firestore project to test out importing with this library. The import was taking more than an hour, so I stopped it. I tried again several times with the same result. How long should an 88 MB file take to import? I used this:
backfire import hm.ndjson --paths stories -K testCredentials.json -P test-hm-bq0gl2 --verbose true
When I look at the logs, it stops at a certain point. I stopped the import and then restarted it. It didn't overwrite the documents it had already imported and continued importing new documents, but then it stopped again:
I deleted the collection from the project and tried again from scratch with the same results.
When I deleted the collection from Firestore, it deleted a little more than 9000 documents. I think the data I was trying to import has more than 10k documents.
Here are the Firestore rules (just in case):
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write;
    }
  }
}
Hi, I'm trying to automatically export a db and make it available publicly, so I added this to GitHub Actions, but it failed with the error below.
npm ERR! could not determine executable to run
This is what I used; any reason why it shouldn't work?
name: Backup Firestore
on: [push]
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - name: Create secret files
        run: |
          echo ${{ secrets.GCP_SA_KEY }} | base64 -d > gcp.json
          echo ${{ secrets.FB_SA_KEY }} | base64 -d > fb.json
      - name: Backup firestore db in json format
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npx backfire export gs://nostrdb-backups --gcpProject nostrdirectory --gcpKeyFile gcp.json --paths twitter --project nostrdirectory --keyFile fb.json
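One likely cause of `npm ERR! could not determine executable to run` is that `npx backfire` resolves a package named `backfire`, while this CLI is published as `firestore-backfire` (with `@google-cloud/firestore` and `@google-cloud/storage` as peer dependencies). A hedged sketch of a fix - step names are illustrative - is to install the package explicitly before invoking the binary:

```yaml
- name: Install export CLI
  run: npm install -g firestore-backfire @google-cloud/firestore @google-cloud/storage
- name: Backup firestore db in json format
  run: backfire export gs://nostrdb-backups --gcpProject nostrdirectory --gcpKeyFile gcp.json --paths twitter --project nostrdirectory --keyFile fb.json
```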
I installed the library globally using:
npm install -g firestore-backfire @google-cloud/firestore
I tried exporting a collection using:
backfire export ./hm -P HistoryMaps -K credentials.json --paths stories
I got a permission-denied error with the following message:
{
  code: 7,
  details: 'Permission denied on resource project HistoryMaps.',
  metadata: Metadata {
    internalRepr: Map(3) {
      'google.rpc.errorinfo-bin' => [Array],
      'google.rpc.help-bin' => [Array],
      'grpc-status-details-bin' => [Array]
    },
    options: {}
  },
  statusDetails: [
    Help { links: [Array] },
    ErrorInfo {
      metadata: [Object],
      reason: 'CONSUMER_INVALID',
      domain: 'googleapis.com'
    }
  ],
  reason: 'CONSUMER_INVALID',
  domain: 'googleapis.com',
  errorInfoMetadata: {
    service: 'firestore.googleapis.com',
    consumer: 'projects/HistoryMaps'
  }
}
Is it possible to use only the project name and an API key to access the data?
Hey! First of all thanks for this lib.
I'm trying to upload my backup to DigitalOcean Spaces, which claims to be compatible with AWS S3. I was wondering whether the S3 writer could perform its operations against a DigitalOcean address, but it seems the matcher defaults to local. If the S3 writer can't simply be reused, is there a way to extend it?
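Whether this library's S3 writer exposes the necessary knobs is a question for the maintainer, but since Spaces speaks the S3 wire protocol, an AWS-SDK-v3-style client generally only needs a custom endpoint. A minimal sketch of the client options involved - the `nyc3` region and the env variable names are illustrative assumptions:

```javascript
// Sketch: client options in the shape `@aws-sdk/client-s3` accepts,
// pointed at a DigitalOcean Spaces endpoint (region name illustrative).
const spacesClientOptions = {
  region: "us-east-1", // required by the SDK; Spaces ignores it
  endpoint: "https://nyc3.digitaloceanspaces.com",
  credentials: {
    accessKeyId: process.env.SPACES_KEY ?? "<key>",
    secretAccessKey: process.env.SPACES_SECRET ?? "<secret>",
  },
};

console.log(new URL(spacesClientOptions.endpoint).hostname);
```

If the writer builds its client from ambient AWS configuration rather than explicit options, setting the endpoint through the SDK's environment configuration may be another avenue worth trying.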
Problem statement
Exporting large collections and deeply nested documents can be quite slow as all paths are explored. Though this is good in some cases, it can be wasteful (lots of Firestore reads) and time consuming if you know you just want to export one or a few particular documents.
Enhancement
Add options to export exact document or collection paths. This would help restrict how deep or wide the program "explores" when looking for documents. It must be distinct from the patterns option, as that option filters the paths it is given rather than deciding which paths to explore more deeply.
Getting the following error while exporting data from a Firestore collection:
backfire 1:58:02 PM error Error: 9 FAILED_PRECONDITION: The requested snapshot version is too old.
at callErrorFromStatus (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/call.js:31:19)
at Object.onReceiveStatus (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/client.js:351:73)
at Object.onReceiveStatus (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:323:181)
at /usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/resolving-call.js:94:78
at processTicksAndRejections (node:internal/process/task_queues:78:11)
for call at
at ServiceClientImpl.makeServerStreamRequest (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/client.js:334:34)
at ServiceClientImpl.<anonymous> (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/make-client.js:105:19)
at /usr/local/lib/node_modules/@google-cloud/firestore/build/src/v1/firestore_client.js:225:29
at /usr/local/lib/node_modules/@google-cloud/firestore/node_modules/google-gax/build/src/streamingCalls/streamingApiCaller.js:38:28
at /usr/local/lib/node_modules/@google-cloud/firestore/node_modules/google-gax/build/src/normalCalls/timeout.js:44:16
at Object.request (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/google-gax/build/src/streamingCalls/streaming.js:130:40)
at makeRequest (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/retry-request/index.js:141:28)
at retryRequest (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/retry-request/index.js:109:5)
at StreamProxy.setStream (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/google-gax/build/src/streamingCalls/streaming.js:121:37)
at StreamingApiCaller.call (/usr/local/lib/node_modules/@google-cloud/firestore/node_modules/google-gax/build/src/streamingCalls/streamingApiCaller.js:54:16) {
code: 9,
details: 'The requested snapshot version is too old.',
metadata: Metadata { internalRepr: Map(0) {}, options: {} }
How do I solve this issue? I have ~1,700,000 documents in the collection. The error occurs after exporting ~1,200,000 documents.
Hi,
Thanks for the great library,
I get the error in the title while trying to implement it; here's our code:
const result = await exportFirestoreData(
  { project: firebaseProjectID, credentials },
  {
    path: "gs://...",
    gcpProject: "...",
    gcpCredentials: { type: "service_account", ... },
  }
);
Thanks,
Théo
In Firestore, documents and collections can exist even if ancestors higher in the path do not (see https://cloud.google.com/firestore/docs/using-console#non-existent_ancestor_documents).
It seems that firestore-backfire does not currently export documents deeper than a non-existent ancestor - is there any way to allow it to do so?
I'm currently using this library: https://www.npmjs.com/package/node-firestore-import-export
It's no longer maintained and it has a flaw that it can't import large datasets.
Will this library do that?
All I want to do is export my Firestore collection (which contains subcollections), do some data cleaning, and then import it back into Firestore. I'm looking for a simple solution.
Thanks in advance.
Hi benyap,
Firstly, I would like to thank you for version 2.x - it is so much better. In 1.x I had problems connecting to the Firebase Emulator (sometimes it connected, sometimes it did not), but now it works fine.
Now, though, I am finding it slow to download documents: with the Firebase Emulator it downloads about 10 documents, and against production about 40 documents, per 5 seconds. I do not know whether the slowness comes from downloading the documents or from saving them to the JSON file. Do you think there is a way to make it faster? I ask because my data is big.
Here is part of the logs:
export info Export data from demo-backup to backfile.ndjson
export verbose {
options: {
paths: [ 'xxxx/xxxx/xxxx/xxxx' ],
exploreInterval: 10,
exploreChunkSize: 5000,
downloadChunkSize: 5000,
downloadInterval: 10,
overwrite: true,
debug: true,
verbose: true
},
connection: { project: 'demo-backup', emulator: 'localhost:8180' }
}
export debug Overwriting existing data at backfile.ndjson
export debug Progress: 0 exported, 0 exploring, 1 to explore
export verbose Exploring for subcollections in 1 document
export verbose Exporting 1 document
export debug Progress: 0 exported, 1 exploring, 0 to explore
export verbose Found 3 subcollections in xxxx/xxxx/xxxx/xxxx
export verbose Exploring for documents in 3 collections
export verbose Found 1 document in xxxx/xxxx/xxxx/xxxx/apps
export verbose Exporting 1 document
export debug Progress: 2 exported, 2 exploring, 1 to explore
export verbose Found 10163 documents in xxxx/xxxx/xxxx/xxxx/xxxx
export verbose Exporting 5000 documents
export verbose Found 24 documents in xxxx/xxxx/xxxx/xxxx/xxxx
export verbose Exploring for subcollections in 5000 documents
export verbose Exporting 5000 documents
export debug Progress: 5002 exported, 5000 exploring, 5188 to explore
export verbose Found 2 subcollections in xxxx/xxxx/xxxx/xxxx/xxxx/xxxx
export verbose Exporting 187 documents
export debug Progress: 10189 exported, 4990 exploring, 5190 to explore
export debug Progress: 10189 exported, 4980 exploring, 5190 to explore
export debug Progress: 10189 exported, 4975 exploring, 5190 to explore
export debug Progress: 10189 exported, 4966 exploring, 5190 to explore
export debug Progress: 10189 exported, 4955 exploring, 5190 to explore
export debug Progress: 10189 exported, 4953 exploring, 5190 to explore
export debug Progress: 10189 exported, 4942 exploring, 5190 to explore
export debug Progress: 10189 exported, 4932 exploring, 5190 to explore
export debug Progress: 10189 exported, 4929 exploring, 5190 to explore
export debug Progress: 10189 exported, 4918 exploring, 5190 to explore
export debug Progress: 10189 exported, 4909 exploring, 5190 to explore
export debug Progress: 10189 exported, 4905 exploring, 5190 to explore
export debug Progress: 10189 exported, 4894 exploring, 5190 to explore
export debug Progress: 10189 exported, 4885 exploring, 5190 to explore
export debug Progress: 10189 exported, 4883 exploring, 5190 to explore
export debug Progress: 10189 exported, 4873 exploring, 5190 to explore
export debug Progress: 10189 exported, 4864 exploring, 5190 to explore
export debug Progress: 10189 exported, 4861 exploring, 5190 to explore
export debug Progress: 10189 exported, 4851 exploring, 5190 to explore
export debug Progress: 10189 exported, 4841 exploring, 5190 to explore
export debug Progress: 10189 exported, 4839 exploring, 5190 to explore
export debug Progress: 10189 exported, 4831 exploring, 5190 to explore
export debug Progress: 10189 exported, 4821 exploring, 5190 to explore
export debug Progress: 10189 exported, 4815 exploring, 5190 to explore
export debug Progress: 10189 exported, 4809 exploring, 5190 to explore
export debug Progress: 10189 exported, 4799 exploring, 5190 to explore
export debug Progress: 10189 exported, 4792 exploring, 5190 to explore
export debug Progress: 10189 exported, 4788 exploring, 5190 to explore
export debug Progress: 10189 exported, 4778 exploring, 5190 to explore
export debug Progress: 10189 exported, 4769 exploring, 5190 to explore
export debug Progress: 10189 exported, 4766 exploring, 5190 to explore
export debug Progress: 10189 exported, 4756 exploring, 5190 to explore
export debug Progress: 10189 exported, 4747 exploring, 5190 to explore
export debug Progress: 10189 exported, 4744 exploring, 5190 to explore
I exported my Firestore collection (88 MB). Since I'm not familiar with the data structure of this export, I used JSON.parse() to inspect it.
import * as fs from "fs"
const filePath = JSON.parse(fs.readFileSync("../../data/hm.ndjson"))
const readStream = fs.createReadStream(filePath, { encoding: "utf8" })
readStream.on("data", (chunk) => {
const lines = chunk.split("\n")
lines.forEach((line) => {
if (line.trim() !== "") {
try {
const jsonObject = JSON.parse(line)
// Analyze the structure of `jsonObject` here
console.log(jsonObject)
} catch (error) {
console.error("Error parsing line:", line)
}
}
})
})
readStream.on("end", () => {
console.log("Finished reading the NDJSON file.")
})
readStream.on("error", (error) => {
console.error("Error reading the file:", error)
})
It gave this error:
SyntaxError: Unexpected non-whitespace character after JSON at position 20441
at JSON.parse ()
at file:///Users/nonoumasy/Downloads/node/transform/findJSONStructure.js:2:23
at ModuleJob.run (node:internal/modules/esm/module_job:194:25)
I looked into the issue, and it seems to have a problem with the _nanoseconds property. I'm not sure why 🤷‍♂️.
Also, VS Code was unable to prettify the NDJSON file because the file size was too big. Any suggestions on how to view it?
I'd like to use your API with local files, but your documentation does not explain how to do this. It's unclear from your code what is required for DefaultOptions & T. Note that my paths are computed, so TypeScript can't see the id prefix, in case that's used.
const writer = await dataSourceFactory.createWriter(exportFilePath, ???)
const reader = await dataSourceFactory.createReader(importFilePath, ???)
Same issue for the other pre-registered readers and writers.
The following import results in an "exportAction is not a function" error.
import { exportAction } from "@benyap/backfire";
The current workaround is to import directly from the dist directory, like so:
import { exportAction } from "@benyap/backfire/dist";