
firestore-rs's Introduction


Firestore for Rust

This library provides a simple API for Google Firestore based on the official gRPC API:

  • Create or update documents using Rust structures and Serde;
  • Support for:
    • Querying/streaming docs/objects;
    • Listing documents/objects (with automatic page scrolling support);
    • Listening for changes from Firestore;
    • Transactions;
    • Aggregated Queries;
    • Streaming batch writes with automatic throttling to avoid time limits from Firestore;
    • K-nearest neighbor (KNN) vector search;
    • Explaining queries;
  • Fluent high-level and strongly typed API;
  • Full async based on Tokio runtime;
  • Macro that helps you use JSON paths as references to your structure fields;
  • Implements its own Serde serializer for Firestore protobuf values;
  • Support for multiple database IDs;
  • Support for extended data types:
    • Firestore timestamp with #[serde(with)] and a specialized structure
    • Lat/Lng
    • References
  • Caching support for collections and documents:
    • In-memory cache;
    • Persistent cache;
  • Google client based on gcloud-sdk library that automatically detects GCE environment or application default accounts for local development;

Quick start

Cargo.toml:

[dependencies]
firestore = "0.41"

Examples

All examples are available in the examples directory.

To run an example with environment variables:

PROJECT_ID=<your-google-project-id> cargo run --example crud

Firestore database client instance and lifecycle

To create a new instance of the Firestore client you need to provide at least a GCP project ID. Avoid creating a new client for each request; instead, create a client once and reuse it whenever possible. Cloning instances is much cheaper than creating new ones.
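Since cloning is the intended way to share the client, a typical pattern looks like this (a sketch, assuming the Tokio runtime and the config_env_var helper used in the examples below):

```rust
use firestore::*;

// Create the client once at startup
let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;

// Clone it into each task or request handler; cloning is cheap and
// shares the underlying connection instead of re-establishing it.
let task_db = db.clone();
tokio::spawn(async move {
    // use task_db for Firestore operations here
});
```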

The client is created using the Firestore::new method:

use firestore::*;

// Create an instance
let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;

This is the recommended way to create a new client instance, since it automatically detects the environment and uses the appropriate credentials: service accounts, Workload Identity on GCP, etc. See the Google authentication section below for more details.

If you need to create a new instance explicitly specifying a key file, you can use:

FirestoreDb::with_options_service_account_key_file(
  FirestoreDbOptions::new(config_env_var("PROJECT_ID")?.to_string()),
  "/tmp/key.json".into()
).await?

or if you need even more flexibility you can use a preconfigured token source and scopes with:

FirestoreDb::with_options_token_source(
  FirestoreDbOptions::new(config_env_var("PROJECT_ID")?.to_string()),
  gcloud_sdk::GCP_DEFAULT_SCOPES.clone(),
  gcloud_sdk::TokenSourceType::File("/tmp/key.json".into())
).await?

Firestore now supports multiple databases per project, so you can specify the database ID in the options:

FirestoreDb::with_options(
  FirestoreDbOptions::new("your-project-id".to_string())
    .with_database_id("your-database-id".to_string())
  )
.await?

Fluent API

The library provides two APIs:

  • Fluent API: to simplify development and improve the developer experience, the library provides a higher-level API starting with v0.12.x. This is the recommended API for all applications.
  • Classic and low-level API: the API that existed before 0.12 is still available and not deprecated, so it is fine to keep using it when needed. The Fluent API is built on top of this classic API, essentially as a set of smart and convenient constructors. The classic API may introduce incompatible changes, so it is not recommended for long-term use.
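The snippets below assume a Serde-enabled model struct. A minimal sketch, with the field set inferred from the fields used in the examples (some later snippets also reference extra fields such as created_at):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
struct MyTestStructure {
    some_id: String,
    some_string: String,
    one_more_string: String,
    some_num: u64,
}
```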
use firestore::*;

const TEST_COLLECTION_NAME: &'static str = "test";

let my_struct = MyTestStructure {
  some_id: "test-1".to_string(),
  some_string: "Test".to_string(),
  one_more_string: "Test2".to_string(),
  some_num: 42,
};

// Create
let object_returned: MyTestStructure = db.fluent()
  .insert()
  .into(TEST_COLLECTION_NAME)
  .document_id(&my_struct.some_id)
  .object(&my_struct)
  .execute()
  .await?;

// Update or Create 
// (Firestore supports creating documents with update if you provide the document ID).
let object_updated: MyTestStructure = db.fluent()
  .update()
  .fields(paths!(MyTestStructure::{some_num, one_more_string}))
  .in_col(TEST_COLLECTION_NAME)
  .document_id(&my_struct.some_id)
  .object(&MyTestStructure {
      some_num: my_struct.some_num + 1,
      one_more_string: "updated-value".to_string(),
      ..my_struct.clone()
  })
  .execute()
  .await?;

// Get object by id
let find_it_again: Option<MyTestStructure> = db.fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .obj()
  .one(&my_struct.some_id)
  .await?;

// Delete data
db.fluent()
  .delete()
  .from(TEST_COLLECTION_NAME)
  .document_id(&my_struct.some_id)
  .execute()
  .await?;

Querying

The library supports a rich querying API with filters, ordering, pagination, etc.

// Query as a stream our data
let object_stream: BoxStream<FirestoreResult<MyTestStructure>> = db.fluent()
  .select()
  .fields(paths!(MyTestStructure::{some_id, some_num, some_string, one_more_string, created_at})) // Optionally select only the fields needed
  .from(TEST_COLLECTION_NAME)
  .filter(|q| { // Fluent filter API example
      q.for_all([
        q.field(path!(MyTestStructure::some_num)).is_not_null(),
        q.field(path!(MyTestStructure::some_string)).eq("Test"),
        // Sometimes you have optional filters
        Some("Test2")
          .and_then(|value| q.field(path!(MyTestStructure::one_more_string)).eq(value)),
      ])
  })
  .order_by([(
    path!(MyTestStructure::some_num),
    FirestoreQueryDirection::Descending,
  )])
  .obj() // Reading documents as structures using the Serde gRPC deserializer
  .stream_query_with_errors()
  .await?;

let as_vec: Vec<MyTestStructure> = object_stream.try_collect().await?;
println!("{:?}", as_vec);

Use:

  • q.for_all for AND conditions
  • q.for_any for OR conditions (Firestore has just recently added support for OR conditions)

You can nest q.for_all/q.for_any.
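For example, an AND group containing a nested OR group might be written like this (a sketch based on the fluent filter API shown above; field names are from the example struct):

```rust
// WHERE some_num IS NOT NULL
//   AND (some_string == "Test" OR one_more_string == "Test2")
.filter(|q| {
    q.for_all([
        q.field(path!(MyTestStructure::some_num)).is_not_null(),
        q.for_any([
            q.field(path!(MyTestStructure::some_string)).eq("Test"),
            q.field(path!(MyTestStructure::one_more_string)).eq("Test2"),
        ]),
    ])
})
```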

Get and batch get support

let find_it_again: Option<MyTestStructure> = db.fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .obj()
  .one(&my_struct.some_id)
  .await?;

let object_stream: BoxStream<(String, Option<MyTestStructure>)> = db.fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .obj()
  .batch(vec!["test-0", "test-5"])
  .await?;

Timestamps support

By default, types such as DateTime serialize as strings to Firestore (while deserialization works from both Timestamps and Strings).

To change this behaviour and support Firestore timestamps at the database level, there are two options:

  • #[serde(with)] and attributes:
#[derive(Debug, Clone, Deserialize, Serialize)]
struct MyTestStructure {
    #[serde(with = "firestore::serialize_as_timestamp")]
    created_at: DateTime<Utc>,

    #[serde(default)]
    #[serde(with = "firestore::serialize_as_optional_timestamp")]
    updated_at: Option<DateTime<Utc>>,
}
  • using a type FirestoreTimestamp:
#[derive(Debug, Clone, Deserialize, Serialize)]
struct MyTestStructure {
    created_at: firestore::FirestoreTimestamp,
    updated_at: Option<firestore::FirestoreTimestamp>
}

This changes serialization only for Firestore; the field still serializes as a string to JSON (so you can reuse the same model for JSON and Firestore).

In your queries you need to use the wrapping type firestore::FirestoreTimestamp, for example:

   q.field(path!(MyTestStructure::created_at)).less_than_or_equal(firestore::FirestoreTimestamp(Utc::now()))

Nested collections

You can work with nested collections by specifying the path/location of the parent document:

// Creating a parent doc
db.fluent()
  .insert()
  .into(TEST_PARENT_COLLECTION_NAME)
  .document_id(&parent_struct.some_id)
  .object(&parent_struct)
  .execute()
  .await?;

// The doc path where we store our children
let parent_path = db.parent_path(TEST_PARENT_COLLECTION_NAME, parent_struct.some_id)?;

// Create a child doc
db.fluent()
  .insert()
  .into(TEST_CHILD_COLLECTION_NAME)
  .document_id(&child_struct.some_id)
  .parent(&parent_path)
  .object(&child_struct)
  .execute()
  .await?;

// Listing children
println!("Listing all children");

let objs_stream: BoxStream<MyChildStructure> = db.fluent()
  .list()
  .from(TEST_CHILD_COLLECTION_NAME)
  .parent(&parent_path)
  .obj()
  .stream_all()
  .await?;

A complete example is available in the examples directory.

You can nest multiple levels of collections using at():

let parent_path = db.parent_path(TEST_PARENT_COLLECTION_NAME, "parent-id")?
  .at(TEST_CHILD_COLLECTION_NAME, "child-id")?
  .at(TEST_GRANDCHILD_COLLECTION_NAME, "grand-child-id")?;

Transactions

To manage transactions manually, use db.begin_transaction() and then the Fluent API to add the operations needed to the transaction.

let mut transaction = db.begin_transaction().await?;

db.fluent()
  .update()
  .fields(paths!(MyTestStructure::{some_string}))
  .in_col(TEST_COLLECTION_NAME)
  .document_id("test-0")
  .object(&MyTestStructure {
    some_id: "test-0".to_string(),
    some_string: "UpdatedTest".to_string(),
  })
  .add_to_transaction(&mut transaction)?;

db.fluent()
  .delete()
  .from(TEST_COLLECTION_NAME)
  .document_id("test-5")
  .add_to_transaction(&mut transaction)?;

transaction.commit().await?;

You may also execute transactions that automatically retry with exponential backoff using run_transaction.

db.run_transaction(|db, transaction| {
  Box::pin(async move {
    let mut test_structure: MyTestStructure = db
      .fluent()
      .select()
      .by_id_in(TEST_COLLECTION_NAME)
      .obj()
      .one(TEST_DOCUMENT_ID)
      .await?
      .expect("Missing document");

    // Perform some kind of operation that depends on the state of the document
    test_structure.test_string += "a";

    db.fluent()
      .update()
      .fields(paths!(MyTestStructure::{test_string}))
      .in_col(TEST_COLLECTION_NAME)
      .document_id(TEST_DOCUMENT_ID)
      .object(&test_structure)
      .add_to_transaction(transaction)?;

    Ok(())
  })
})
.await?;

A complete example is available in the examples directory.

Please note that Firestore doesn't support creating documents inside transactions (i.e. generating document IDs automatically), so you need to use update() to create documents implicitly, specifying your own IDs.

Reading Firestore document metadata as struct fields

Firestore provides additional generated fields for each document you create:

  • _firestore_id: the generated document ID (when it is not specified by the client);
  • _firestore_created: the time at which the document was created;
  • _firestore_updated: the time at which the document was last changed;

To be able to read them, the library exposes them as system fields with reserved names for the Serde deserializer, so you can specify them in your structures as:

#[derive(Debug, Clone, Deserialize, Serialize)]
struct MyTestStructure {
    #[serde(alias = "_firestore_id")]
    id: Option<String>,
    #[serde(alias = "_firestore_created")]
    created_at: Option<DateTime<Utc>>,
    #[serde(alias = "_firestore_updated")]
    updated_at: Option<DateTime<Utc>>,
    some_string: String,
    one_more_string: String,
    some_num: u64,
}

A complete example is available in the examples directory.

Working on dynamic/document level

Sometimes a static structure can be too restrictive when working with dynamic data, so the Fluent API also lets you work with documents without introducing structures at all.

let object_returned = db.fluent()
  .insert()
  .into(TEST_COLLECTION_NAME)
  .document_id("test-1")
  .document(FirestoreDb::serialize_map_to_doc("",
    [
      ("some_id", "test-id".into()),
      ("some_string", "test-value".into()),
      ("some_num", 42.into()),
      (
        "embedded_obj",
        FirestoreValue::from_map([
          ("inner_some_id", "inner-id-value".into()),
          ("inner_some_string", "inner-some-value".into()),
        ]),
      ),
      ("created_at", FirestoreTimestamp(Utc::now()).into()),
    ])?
  )
  .execute()
  .await?;

A full example is available in the examples directory.

Document transformations

The library supports server-side document transformations in transactions and batch writes:

// Only transformation
db.fluent()
  .update()
  .in_col(TEST_COLLECTION_NAME)
  .document_id("test-4")
  .transforms(|t| { // Transformations
      t.fields([
        t.field(path!(MyTestStructure::some_num)).increment(10),
        t.field(path!(MyTestStructure::some_array)).append_missing_elements([4, 5]),
        t.field(path!(MyTestStructure::some_array)).remove_all_from_array([3]),
      ])
  })
  .only_transform()
  .add_to_transaction(&mut transaction)?; // or add_to_batch

// Update and transform (in this order and atomically):
db.fluent()
  .update()
  .in_col(TEST_COLLECTION_NAME)
  .document_id("test-5")
  .object(&my_obj) // Updating the object fields here
  .transforms(|t| { // Transformations after the update
      t.fields([
        t.field(path!(MyTestStructure::some_num)).increment(10),
      ])
  })
  .add_to_transaction(&mut transaction)?; // or add_to_batch

Listening for document changes in Firestore

To help with asynchronous event handling, the library provides a high-level API for listening to Firestore events on a separate thread.

The listener implementation needs to be provided with storage for the last received token per target, so it can resume listening from the last handled token and avoid receiving all previous changes again.

The library provides basic implementations for storing the tokens, but you can implement your own, more sophisticated storage if needed:

  • FirestoreTempFilesListenStateStorage - resume tokens stored as temporary files on the local FS;
  • FirestoreMemListenStateStorage - in-memory storage backed by a HashMap (with this implementation, if you restart your app you will receive all notifications again);
let mut listener = db.create_listener(
    FirestoreTempFilesListenStateStorage::new() // or FirestoreMemListenStateStorage or your own implementation
).await?;

// Adding a query listener
db.fluent()
  .select()
  .from(TEST_COLLECTION_NAME)
  .listen()
  .add_target(TEST_TARGET_ID_BY_QUERY, &mut listener)?;

// Adding a docs listener by IDs
db.fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .batch_listen([doc_id1, doc_id2])
  .add_target(TEST_TARGET_ID_BY_DOC_IDS, &mut listener)?;

listener
  .start(|event| async move {
    match event {
      FirestoreListenEvent::DocumentChange(ref doc_change) => {
        println!("Doc changed: {:?}", doc_change);

        if let Some(doc) = &doc_change.document {
          let obj: MyTestStructure = FirestoreDb::deserialize_doc_to::<MyTestStructure>(doc)
            .expect("Deserialized object");
          println!("As object: {:?}", obj);
        }
      }
      _ => {
        println!("Received a listen response event to handle: {:?}", event);
      }
    }

    Ok(())
  })
  .await?;

// Wait for some event like Ctrl-C, signals, etc.
// <put-your-implementation-for-wait-here>

// and then shutdown
listener.shutdown().await?;

A complete example is available in the examples directory.

Explicit null value serialization

By default, all Option<> fields are serialized as absent fields, which is convenient in many cases. However, sometimes you need explicit nulls.

To help with that there are additional attributes implemented for serde(with):

  • For any type:
#[serde(default)]
#[serde(with = "firestore::serialize_as_null")]
test_null: Option<String>,
  • For Firestore timestamps attribute:
#[serde(default)]
#[serde(with = "firestore::serialize_as_null_timestamp")]
test_null: Option<DateTime<Utc>>,

Select aggregate functions

The library supports aggregation functions for queries:

db.fluent()
  .select()
  .from(TEST_COLLECTION_NAME)
  .aggregate(|a| a.fields([a.field(path!(MyAggTestStructure::counter)).count()]))
  .obj()
  .query()
  .await?;

Update/delete preconditions

The library supports write preconditions:

  .precondition(FirestoreWritePrecondition::Exists(true))
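For example, a delete that should fail unless the document already exists could be sketched as follows (combining the fluent delete shown earlier with the precondition; a sketch, not verified against a live database):

```rust
db.fluent()
  .delete()
  .from(TEST_COLLECTION_NAME)
  .document_id("test-5")
  .precondition(FirestoreWritePrecondition::Exists(true))
  .execute()
  .await?;
```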

Explaining the query

The library supports query explanation:

db.fluent()
  .select()
  .from(TEST_COLLECTION_NAME)
  .explain()
  // or use explain_with_options if you want to provide additional options such as analyze, which runs the query to gather additional statistics
  // .explain_with_options(FirestoreExplainOptions::new().with_analyze(true))
  .stream_query_with_metadata()
  .await?;

Google authentication

The library looks for credentials in the following places, preferring the first location found:

  • A JSON file whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
  • A JSON file in a location known to the gcloud command-line tool, created by gcloud auth application-default login.
  • On Google Compute Engine, it fetches credentials from the metadata server.

Local development

Don't confuse gcloud auth login with gcloud auth application-default login for local development: the first authorizes only the gcloud tool to access the Cloud Platform.

The latter obtains user access credentials via a web flow and puts them in the well-known location for Application Default Credentials (ADC). This command is useful when you are developing code that would normally use a service account but need to run it in a local development environment where it's easier to provide user credentials. So for local development you need gcloud auth application-default login.

Working with docker images

When you design your Dockerfile, make sure you either install Root CA certificates or use base images that already include them. If you don't have certs installed, you will usually observe errors such as:

SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "GrpcStatus(tonic::transport::Error(Transport, hyper::Error(Connect, Custom { kind: InvalidData, error: InvalidCertificateData(\"invalid peer certificate: UnknownIssuer\") })))" }, message: "GCloud system error: Tonic/gRPC error: transport error" })

For Debian-based images, for example, this can usually be fixed by installing this package:

RUN apt-get install -y ca-certificates

Also, I recommend considering using Google Distroless images since they are secure, already include Root CA certs, and are optimised for size.

Firestore emulator

To work with the Google Firestore emulator you can use the environment variable:

export FIRESTORE_EMULATOR_HOST="localhost:8080"

or specify it as an option using FirestoreDb::with_options()

Caching

The library supports caching for collections and documents. Caching leverages the Firestore listener to update the cache when a document changes, which means updates are propagated across distributed instances automatically.

This is useful to avoid reading (and paying for) the same documents from Firestore multiple times, especially for data such as dictionaries, configuration, and other information that doesn't change frequently. In fact, this can really help reduce both costs and latency in your applications.

Caching works on the document level. The cache will be used for the following operations:

  • Reading documents by IDs (get and batch get);
  • Listing all documents in a collection;
  • Partial support for querying documents in a collection:
    • Filtering;
    • Ordering;
    • Paging/Cursors;

(Caching of other operations may be added in the future.)

The library provides two implementations of the cache:

  • In-memory cache, implemented using moka cache library;
  • Persistent cache, implemented using redb and protobuf;

Caching is opt-in and you need to enable it when needed using cargo features:

  • caching-memory for in-memory cache;
  • caching-persistent for persistent/disk-backed cache;

Load modes

Caching supports different init/load modes:

  • PreloadNone: Don't preload anything, just fill in the cache while working;
  • PreloadAllDocs: Preload all documents from the collection to the cache;
  • PreloadAllIfEmpty: Preload all documents from the collection to the cache only if the cache is empty (this is only useful for the persistent cache; for the memory cache it is the same as PreloadAllDocs);

How a cache is updated

The cache is updated in the following cases:

  • When you read a document through the cache by ID and it is not found in the cache, it will be loaded from Firestore and cached;
  • The Firestore listener updates the cache when it receives a notification about a document change (externally or from your app);
  • Using preloads at startup time;

Usage

// Create an instance
let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;

const TEST_COLLECTION_NAME: &'static str = "test-caching";

// Create a cache instance that also creates an internal Firestore listener
let mut cache = FirestoreCache::new(
  "example-mem-cache".into(),
  &db,
  FirestoreMemoryCacheBackend::new(
    FirestoreCacheConfiguration::new().add_collection_config(
      &db,
      FirestoreCacheCollectionConfiguration::new(
        TEST_COLLECTION_NAME,
        FirestoreListenerTarget::new(1000),
        FirestoreCacheCollectionLoadMode::PreloadNone,
      )
    ),
  )?,
  FirestoreMemListenStateStorage::new(),
)
.await?;

// Load and init the cache
cache.load().await?; // Required even if you don't preload anything

// Read a document through the cache. If it is not found in the cache, it will be loaded from Firestore and cached.
let my_struct0: Option<MyTestStructure> = db.read_through_cache(&cache)
  .fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .obj()
  .one("test-1")
  .await?;

// Read a document only from the cache. If it is not found in the cache, it will return None.
let my_struct0: Option<MyTestStructure> = db.read_cached_only(&cache)
  .fluent()
  .select()
  .by_id_in(TEST_COLLECTION_NAME)
  .obj()
  .one("test-1")
  .await?;

Full examples are available in the examples directory.

How this library is tested

There are integration tests in the tests directory that run for every commit against a real Firestore instance allocated for testing purposes. Be careful not to introduce huge document reads/updates, and keep collections isolated from other tests.

Licence

Apache Software License (ASL)

Author

Abdulla Abdurakhmanov

firestore-rs's People

Contributors

abdolence, ajw221, anna-hope, bouzuya, chanced, emanon001, heytdep, jokil123, nickcaplinger, pierd, renovate[bot]


firestore-rs's Issues

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

cargo
Cargo.toml
  • tracing 0.1
  • gcloud-sdk 0.24.5
  • hyper 0.14
  • struct-path 0.2
  • rvstruct 0.3.2
  • rsb_derive 0.5
  • serde 1.0
  • tokio 1
  • tokio-stream 0.1
  • futures 0.3
  • chrono 0.4
  • async-trait 0.1
  • hex 0.4
  • backoff 0.4
  • redb 2.1
  • moka 0.12
  • cargo-husky 1.5
  • tracing-subscriber 0.3
  • tokio 1
  • tempfile 3
  • approx 0.5
github-actions
.github/workflows/security-audit.yml
  • actions/checkout v4
.github/workflows/tests.yml
  • actions/checkout v4
  • google-github-actions/auth v2
  • google-github-actions/setup-gcloud v2

  • Check this box to trigger a request for Renovate to run again on this repository

FIRESTORE_EMULATOR_HOST must have a scheme

The documentation for Firestore suggests setting FIRESTORE_EMULATOR_HOST to localhost:[PORT] (e.g. localhost:8080), but doing so results in the following error:

SystemError(
    FirestoreSystemError {
        public: FirestoreErrorPublicGenericDetails {
            code: "GrpcStatus(tonic::transport::Error(Transport, hyper::Error(Connect, \"invalid URL, scheme is missing\")))" 
        }, 
        message: "GCloud system error: Tonic/gRPC error: transport error" 
    }
)

Possibly unnecessary `Sync`

The following code:

use std::io::Read;

use firestore::{FirestoreDb, FirestoreListenerTarget};
use tower_controller_rs::temp_file_token_storage::TempFileTokenStorage;

#[tokio::main]
async fn main() {
    let db = FirestoreDb::new("my-project-id").await.unwrap();

    let mut listener = db.create_listener(TempFileTokenStorage).await.unwrap();

    db.fluent()
        .select()
        .from("my-col")
        .listen()
        .add_target(FirestoreListenerTarget::new(1), &mut listener)
        .unwrap();

    listener
        .start(move |r| {
            let db = db.clone();
            async move {
                db.fluent()
                    .select()
                    .by_id_in("mycol")
                    .one("mydoc")
                    .await
                    .unwrap();
                Ok(())
            }
        })
        .await
        .unwrap();

    std::io::stdin().read(&mut [1]).unwrap();

    listener.shutdown().await.unwrap();
}

Fails to compile with the following error:

error: future cannot be shared between threads safely
   --> src\bin\error_example.rs:20:10
    |
20  |         .start(move |r| {
    |          ^^^^^ future created by async block is not `Sync`
    |
    = help: the trait `Sync` is not implemented for `dyn Future<Output = Result<gcloud_sdk::apis::google::firestore::v1::Document, FirestoreError>> + Send`
note: future is not `Sync` as this value is used across an await
   --> D:\Repos\firestore-rs\src\fluent_api\select_builder.rs:330:17
    |
322 |               match self
    |  ___________________-
323 | |                 .db
324 | |                 .get_doc_at::<S>(
325 | |                     parent.as_str(),
...   |
328 | |                     self.return_only_fields,
329 | |                 )
    | |_________________- has type `Pin<Box<dyn Future<Output = Result<gcloud_sdk::apis::google::firestore::v1::Document, FirestoreError>> + Send>>` which is not `Sync`
330 |                   .await
    |                   ^^^^^^ await occurs here, with the value maybe used later
...
338 |           } else {
    |           - the value is later dropped here
note: required by a bound in `FirestoreListener::<D, S>::start`
   --> D:\Repos\firestore-rs\src\db\listen_changes.rs:231:57
    |
231 |         F: Future<Output = BoxedErrResult<()>> + Send + Sync + 'static,
    |                                                         ^^^^ required by this bound in `FirestoreListener::<D, S>::start`

This is due to the start function requiring a Sync future and the fluent API not returning a future with Sync. It is possible to remove this requirement from the start function (and a function it calls) without causing an error, though it is not guaranteed everything still works. I will open a draft pull request (#71).

LatLng not supported yet

When trying to read a db document I get the following error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: DeserializeError(FirestoreSerializationError { public: FirestoreErrorPublicGenericDetails { code: "LatLng not supported yet" } })'

it appears to be due to a GeoPoint field on my document

The code to reproduce (I assume you just need to request a doc with a GeoPoint value):

use firestore::*;
use serde::{Deserialize, Serialize};

#[tokio::main]
async fn main() {
    dotenv::dotenv().ok();
    let project_id: String = std::env::var("PROJECT_ID").unwrap();
    let db = FirestoreDb::new(project_id).await.unwrap();

    let t: Option<Test> = db
        .fluent()
        .select()
        .by_id_in("towers")
        .obj()
        .one("5aQQXeYkP0xfW3FJxjH0")
        .await
        .unwrap();
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct Test {}

I can confirm that my code is able to request documents without geopoints. How could I get this to work for documents with geopoints (aside from changing my db structure)?

Token

How can I find the token? I tried to create a document but it returned this error:
"error": "13 INTERNAL: Firestore system/internal error: GCloud system error: token source error: not found token source"

Is it possible to log in with a GOOGLE_APPLICATION_CREDENTIALS admin SDK .json?

Can I get `Vec<Result<T,DecodeFailure>>` from querying methods instead of `Result<Vec<T>,FirestoreError>`?

For example,

let items = firestore_db
        .query_obj::<Item>(params)
        .await

query_obj returns Result<Vec<Item>, FirestoreError> in an "all or nothing" way.
That is, if any of the Items raises a decode error, it returns Err(FirestoreError).

I think it would be better for library users to have more control over error handling (e.g. ignoring or recovering from errors).

Therefore, I want Vec<Result<T,FirestoreError>> from querying methods instead of Result<Vec<T>,FirestoreError>.

In addition, I would be happy if I could get raw FirestoreValue from querying methods and manually decode the items one by one.
Is there any method or api like that?

let raw_values: Vec<FirestoreValue> = firestore_db.query_obj::<???>(params).await;
let it : Vec<Result<T,Error>> =  raw_values.iter().map(|value| /* try decode raw value to `T` here */ )

firestore::db::listen_changes raises EOF error from hyper::proto::h2::client pretty often?

Is this behavior expected?
I get this message probably once every 1-2 minutes while my listeners are running:

2023-03-12T22:12:51.631540Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Ping { ack: false, payload: [59, 124, 219, 122, 11, 135, 22, 180] }
2023-03-12T22:12:51.645119Z DEBUG Connection{peer=Client}: h2::codec::framed_read: received frame=GoAway { error_code: ENHANCE_YOUR_CALM, last_stream_id: StreamId(1), debug_data: b"too_many_pings" }
2023-03-12T22:12:51.645198Z DEBUG Connection{peer=Client}: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
2023-03-12T22:12:51.645344Z DEBUG hyper::proto::h2::client: connection error: unexpected end of file
2023-03-12T22:12:51.645407Z DEBUG hyper::proto::h2::client: client request body error: error writing a body to connection: send stream capacity unexpectedly closed
2023-03-12T22:12:51.645467Z DEBUG tonic::codec::decode: decoder inner stream error: Status { code: Unknown, message: "error reading a body from connection: unexpected end of file", source: Some(hyper::Error(Body, Error { kind: Io(Kind(UnexpectedEof)) })) }
2023-03-12T22:12:51.645513Z DEBUG firestore::db::listen_changes: Listen error occurred DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: "Unknown" }, details: "Hyper error: error reading a body from connection: unexpected end of file", retry_possible: false }). Restarting in 5s...
2023-03-12T22:12:51.645560Z DEBUG hyper::client::service: connection error: hyper::Error(Io, Kind(UnexpectedEof))
I'm not really sure if this is a bug or if I'm doing something wrong. The listeners still work fine; they just have to restart for 5 seconds every 1-2 minutes.

Issue with paths macro and serde rename

Updating serde-renamed fields with the paths! macro does not work. In the example below, the struct fields are renamed to myValue and myOtherValue; trying to update these fields does not work (no changes are applied), and using paths!(MyStruct::myValue) produces a compile error.

use dotenv::dotenv;
use firestore::{
    struct_path::{path, paths},
    FirestoreDb,
};
use serde::{Deserialize, Serialize};

#[tokio::main]
async fn main() {
    dotenv().ok();

    let db = FirestoreDb::new("fahrradturm").await.unwrap();

    let changed_doc = db
        .fluent()
        .update()
        .fields(paths!(MyStruct::my_value))
        .in_col("myCollection")
        .document_id("myDocument")
        .object(&MyStruct {
            my_value: 42,
            ..Default::default()
        })
        .execute::<MyStruct>()
        .await
        .unwrap();

    println!("{:?}", changed_doc);
}

#[derive(Debug, Serialize, Deserialize, Default)]
struct MyStruct {
    #[serde(rename = "myValue")]
    my_value: i32,
    #[serde(rename = "myOtherValue")]
    my_other_value: i32,
}
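A possible workaround until the macro understands serde renames is to pass the wire (renamed) field names to `.fields(...)` directly. The helper below is a minimal, hypothetical sketch of the snake_case-to-camelCase mapping that would be needed; note that the struct-path crate may also ship camel-case variants of its macros, which is worth checking in its docs.

```rust
/// Convert a snake_case Rust field name to the camelCase name produced by
/// `#[serde(rename = "...")]` / `#[serde(rename_all = "camelCase")]`.
/// Hypothetical helper, not part of firestore-rs.
fn snake_to_camel(field: &str) -> String {
    let mut out = String::with_capacity(field.len());
    let mut upper_next = false;
    for ch in field.chars() {
        if ch == '_' {
            // Drop the underscore and uppercase the following character.
            upper_next = true;
        } else if upper_next {
            out.extend(ch.to_uppercase());
            upper_next = false;
        } else {
            out.push(ch);
        }
    }
    out
}
```

The output (e.g. `"myValue"`) could then be fed to `.fields(...)` in place of `paths!(MyStruct::my_value)`.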

Writing reference type fields

Is it possible to store values of type reference? I've been trying for a couple of days (while getting familiar with the crate), but all I can get is:

  • Writing a string instead
  • Getting a compiler error when trying to write a (non-serializable) ValueType::ReferenceValue containing the string version of the doc reference.

Thanks in advance!

Failed to initialize FirestoreDb: "invalid peer certificate: UnknownIssuer"

Hi, I was able to test my application locally by specifying the path to the 'secret.json' as 'GOOGLE_APPLICATION_CREDENTIALS' environment variable. However, when I deployed my application to GCP Cloud Run and specified the 'GOOGLE_APPLICATION_CREDENTIALS' environment variable to point to the path containing 'secret.json', I encountered the following error:

Failed to initialize FirestoreDb: SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "GrpcStatus(tonic::transport::Error(Transport, hyper::Error(Connect, Custom { kind: InvalidData, error: InvalidCertificateData(\"invalid peer certificate: UnknownIssuer\") })))" }, message: "GCloud system error: Tonic/gRPC error: transport error" })

I have verified that 'secret.json' is present in the specified path on Cloud Run, so I'm not sure why this error is occurring. Has anyone else encountered a similar issue? Any insights or suggestions would be greatly appreciated.

FirestoreListenerTarget with a u32 value will overflow; when cast to u16 and then back to u32 it works. I think we need it to be u16

Hey, I know you just closed this issue.
Thanks for fixing that so quickly you sir are a rockstar!
I upgraded to your latest version and tested some changes to my API app.
Unfortunately, there's a new problem with the u32 overflowing:

I would suggest simply using a u16 for the interior FirestoreListenerTarget type instead.

Here's a copy of the error trace:

2023-03-12T21:27:08.530169Z ERROR firestore::db::listen_changes: Listen error InvalidParametersError(FirestoreInvalidParametersError { public: FirestoreInvalidParametersPublicDetails { field: "Invalid target ID: 3104448471 out of range integral type conversion attempted", error: "target_id" } }). Exiting...

I can also confirm that when the uint is cast to u16 first the fluent listener API works flawlessly.

Let me know if you need more info?

Need more error information - batch documents as a stream

We recently upgraded an application in production that was using Firestore and now we are seeing errors like the below in the log. Is there a particular method that would cause this issue? It is very difficult to troubleshoot without additional information or context.

err: "Invalid serialization: FirestoreErrorPublicGenericDetails { code: "SerializationError" }"
message: "Error occurred while consuming batch documents as a stream."

Partial document update with dynamically assigned fields

Hey @abdolence ,

thank you for this library! I have a question regarding the update of documents and I'd appreciate your help since I'm pretty new to Rust. I'm using Axum as a server and to receive some data from a web frontend.

I'd like to update certain fields of a document. The tricky part is that these fields should be assigned dynamically, meaning that prior to calling the function I cannot know exactly which fields are going to be updated. I'm having a bit of a hard time defining the type of the struct for the object() function, as well as defining the paths in .fields(paths!(...)).

For example, given the following struct:

struct User {
    username: String,
    email: String,
    city: String,
    occupation: String,
}

I'd like to be able to update only some fields of a User document. Which fields exactly these are going to be, is up to the user input in the front end of the application.

Could you give me insights about how I could achieve such a "dynamic" update?

Thank you.
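One common pattern for this kind of partial update is an update struct whose fields are all `Option`s, with the Firestore field list built at runtime from whichever options are set (this assumes `.fields(...)` accepts plain runtime strings, not only the `paths!` macro output). A minimal sketch with hypothetical names:

```rust
/// Fields a client may send for a partial update; `None` means "leave unchanged".
/// Hypothetical request type, mirroring the `User` struct above.
#[derive(Default)]
struct UserUpdate {
    username: Option<String>,
    email: Option<String>,
    city: Option<String>,
    occupation: Option<String>,
}

/// Collect the Firestore field paths that are actually present in this update.
fn changed_fields(u: &UserUpdate) -> Vec<&'static str> {
    let mut fields = Vec::new();
    if u.username.is_some() { fields.push("username"); }
    if u.email.is_some() { fields.push("email"); }
    if u.city.is_some() { fields.push("city"); }
    if u.occupation.is_some() { fields.push("occupation"); }
    fields
}
```

The result of `changed_fields(&update)` would then go into `.fields(...)` while the merged object goes into `.object(...)`; fields not listed stay untouched in Firestore.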

Live data update support?

Quick question: does this library support live data updates from Firestore (e.g. in Kotlin it's a StateFlow), where the variable is updated when the value in Firestore changes?
If not, ignore the rest of the issue.
If yes, how exactly is this done? I'm making a firebase-sdk for Rust and I'm not sure how to support live data updates.

Retrieve item from a "grandchild" collection

Hey @abdolence,

I'm once again in need of assistance. As the title says, I'd like to retrieve an item from a collection which has a parent and a grandparent collection. The structure is like this:

waves (top collection)/
└── user_doc1/
    ├── uid (field)
    └── personal_waves (subcollection)/
        └── wave_doc1/
            ├── title (field)
            └── reactions (subcollection)/
                ├── reaction_doc1
                └── reaction_doc2

I need to access the items of the reactions subcollection which is a child of personal_waves, which in turn is a child of waves.

I tried chaining the parent() method like this but it did not produce a desired result:

    // wave_performer_uid and wave_id are function parameters

    let waves_collection_path = db
        .parent_path("waves", wave_performer_uid)
        .expect("Waves collection could not be retrieved");

    let personal_waves_subcollection_path = db
        .parent_path("personal_waves", wave_id)
        .expect("Personal waves subcollection could not be retrieved");

    let reaction_stream: BoxStream<FirestoreReaction> = db
        .fluent()
        .list()
        .from("reactions")
        .parent(&personal_waves_subcollection_path)
        .parent(&waves_collection_path)
        .obj()
        .stream_all()
        .await
        .expect("Reactions could not be retrieved");

Both parent paths seem to "start" at the top level of Firestore. Is there any way I can combine them so I can retrieve the "grandchild" collection?

Thanks for your guidance :)
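If I read the firestore-rs API correctly, a nested parent path is built by chaining rather than by passing two independent top-level paths, e.g. something like `db.parent_path("waves", wave_performer_uid)?.at("personal_waves", wave_id)?` (treat the `at` method name as an assumption to verify against the crate docs). The relative path that the reactions collection lives under can be sketched as:

```rust
/// Build the relative Firestore path for a nested parent chain, e.g.
/// waves/{user}/personal_waves/{wave}. The "reactions" collection then
/// lives directly under this path. Hypothetical helper for illustration.
fn nested_parent_path(segments: &[(&str, &str)]) -> String {
    segments
        .iter()
        .map(|(collection, doc_id)| format!("{collection}/{doc_id}"))
        .collect::<Vec<_>>()
        .join("/")
}
```

The key point is that there is one chained parent path, not two separate `.parent(...)` calls each anchored at the database root.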

Hyper error: error reading a body from connection: unexpected end of file

2023-01-11T21:42:46.912805Z DEBUG firestore::db::listen_changes: Listen error occurred DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: "Unknown" }, details: "Hyper error: error reading a body from connection: unexpected end of file", retry_possible: false }). Restarting in 0ns...

Happens periodically (roughly every minute) when listening to document changes, even when running the example. The workaround is to create a listener with params and a retry delay of 0:

let mut listener = db
        .create_listener_with_params(
            TempFileTokenStorage,
            FirestoreListenerParams::new().with_retry_delay(Duration::from_secs(0)),
        )
        .await?;

This solution, however, seems a bit hacky, and I fear that some document updates might be lost.

Support for listing (sub-)collections

I would like to propose a feature to list (sub-)collections (because I want to implement a recursive delete for my testing code).
If you want, I could also try to implement it myself and open a PR.

Listen to Collection changes

I was just exploring Firestore with Rust and I found your awesome library!

I'm trying to "listen" to changes to a Firestore collection; is that possible with this library?

I see that you have listen_doc_changes (which I assume is for listening to a single document?). Could you share an example of how to use it?

read_through_cache does not cache data for list/select by query operations

I wrote the following code to test if it can be cached with read_through_cache and read from cache with read_cached_only.

pub async fn init_db_cache(
    db: &FirestoreDb,
    cache_name: &str,
    collection_path: &str,
    parent_path: ParentPathBuilder,
    listener_target: u32,
    max_capacity: u64,
) -> Result<FirestoreCache<FirestoreMemoryCacheBackend, FirestoreMemListenStateStorage>, FirestoreError> {
    let conf = FirestoreCacheCollectionConfiguration::new(
        collection_path,
        FirestoreListenerTarget::new(listener_target),
        FirestoreCacheCollectionLoadMode::PreloadNone,
    ).with_parent(parent_path);

    let mut cache = FirestoreCache::new(
        cache_name.into(),
        db,
        FirestoreMemoryCacheBackend::with_max_capacity(
            FirestoreCacheConfiguration::new().add_collection_config(&db, conf),
            max_capacity
        )?,
        FirestoreMemListenStateStorage::new(),
    ).await?;

    cache.load().await?;

    Ok(cache)
}

async fn main() -> Result<()> {
    let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;
    let parent_path = db.parent_path("p", "c").unwrap();
    let cache = init_db_cache(&db, "mem-cache", "mydoc", parent_path.clone(), 1000, 32 * 1024 * 1024).await?;

    let _ = db.read_through_cache(&cache)
        .fluent()
        .list()
        .from("mydoc")
        .parent(&parent_path)
        .order_by([
            (path!(MyDoc::last_update_millis), FirestoreQueryDirection::Descending),
        ])
        .obj()
        .stream_all_with_errors()
        .await?
        .try_collect::<Vec<MyDoc>>()
        .await;

    let list = db.read_cached_only(&cache) // Should be able to get list.
        .fluent()
        .list()
        .from("mydoc")
        .parent(&parent_path)
        .order_by([
            (path!(MyDoc::last_update_millis), FirestoreQueryDirection::Descending),
        ])
        .obj()
        .stream_all_with_errors()
        .await?
        .try_collect::<Vec<MyDoc>>()
        .await;
}

I expected to be able to retrieve the list in the subsequent read_cached_only because it is cached in read_through_cache, but I could not retrieve it.

I also tested PreloadAllDocs with a different code, and it worked fine!

run_transaction does not properly rollback transaction

See example test case below. The test case results in this warning:

running 1 test
2023-12-21T20:01:29.739471Z  INFO firestore::db: Creating a new database client. database_path="projects/kidstrong-at-home/databases/(default)" api_url="https://firestore.googleapis.com" token_scopes="https://www.googleapis.com/auth/cloud-platform"
2023-12-21T20:01:30.444972Z DEBUG Firestore Delete Document{/firestore/collection_name="integration-test-transactions" /firestore/document_name="projects/kidstrong-at-home/databases/(default)/documents/integration-test-transactions/test-1" /firestore/response_time=499}: firestore::db::delete: Deleted a document. collection_id="integration-test-transactions" document_id="test-1"
2023-12-21T20:01:30.622907Z DEBUG Firestore Update Document{/firestore/collection_name="integration-test-transactions" /firestore/document_name="projects/kidstrong-at-home/databases/(default)/documents/integration-test-transactions/test-1" /firestore/response_time=177}: firestore::db::update: Updated the document. collection_id="integration-test-transactions" document_id="projects/kidstrong-at-home/databases/(default)/documents/integration-test-transactions/test-1"
2023-12-21T20:01:30.728739Z DEBUG Firestore Transaction{/firestore/transaction_id="11a6a1fe3465dcbd"}: firestore::db::transaction: Created a new transaction. mode=ReadWrite
2023-12-21T20:01:30.728948Z  WARN Firestore Transaction{/firestore/transaction_id="11a6a1fe3465dcbd"}: firestore::db::transaction: Transaction was neither committed nor rolled back.
Error: ErrorInTransaction(FirestoreErrorInTransaction { transaction_id: [17, 166, 161, 254, 52, 101, 220, 189, 0, 34, 89, 0, 203, 220, 143, 239, 246, 114, 138, 1, 84, 133, 207, 78, 36, 234, 142, 18, 93, 183, 207, 190, 126, 86, 92, 47, 161, 7, 101, 108, 228, 165, 235, 0, 154, 124, 176, 24, 192, 63, 46, 49, 156, 40, 228, 185, 191, 53, 103, 178, 46, 96, 22, 165, 40, 170, 24, 167, 9, 190, 70, 215, 217, 2, 177, 219, 204, 215, 233, 87, 162, 199, 162, 1, 79, 110, 72, 122, 87, 52, 9, 150, 146, 205, 112, 224, 43, 140, 73, 133], source: MyError { details: "test error" } })
test transaction_error_tests ... FAILED

Test code:

#[tokio::test]
async fn transaction_error_tests() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let db = setup().await?;

    const TEST_COLLECTION_NAME: &str = "integration-test-transactions";

    let my_struct = MyTestStructure {
        some_id: "test-1".to_string(),
        some_string: "Test".to_string(),
    };

    db.fluent()
        .delete()
        .from(TEST_COLLECTION_NAME)
        .document_id(&my_struct.some_id)
        .execute()
        .await?;

    let object_created: MyTestStructure = db
        .fluent()
        .update()
        .in_col(TEST_COLLECTION_NAME)
        .precondition(FirestoreWritePrecondition::Exists(false))
        .document_id(&my_struct.some_id)
        .object(&my_struct.clone())
        .execute()
        .await?;

    assert_eq!(object_created, my_struct);
    db.run_transaction(|_db, _tx| {
        Box::pin(async move {
            //Test returning an error
            Err(backoff::Error::Permanent(MyError::new("test error")))
        })
    })
    .await?;

    Ok(())
}

#[derive(Debug)]
pub struct MyError {
    details: String,
}

impl MyError {
    fn new(msg: &str) -> MyError {
        MyError {
            details: msg.to_string(),
        }
    }
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(f, "{}", self.details)
    }
}

impl std::error::Error for MyError {
    fn description(&self) -> &str {
        &self.details
    }
}

Certificate error when running on Cloud Run

I have been trying to use this library from a Cloud Run instance, but it looks like whenever the database attempts to initialize I end up with a strange error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "GrpcStatus(tonic::transport::Error(Transport, hyper::Error(Connect, Custom { kind: InvalidData, error: InvalidCertificateData(\"invalid peer certificate: UnknownIssuer\") })))" }, message: "GCloud system error: Tonic/gRPC error: transport error" })'

Has anyone had an issue like this before where FirestoreDb::new gives an error value? The code works when run locally.

Non-Firebase errors from within `run_transaction`?

run_transaction expects the closure to return Result<T, FirestoreError>, but I need a variant that is not Firestore-related for my internal logic, which could error out.

Something as simple as Custom(Box<dyn std::error::Error>) would be good enough for my needs.

I can write it but wanted to see if there's already a variant that fits this bill and if not, if you were open to the above solution.
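Until such a variant exists, a wrapper error enum in application code can bridge both worlds; a minimal sketch with illustrative names (in real code the `Firestore` variant would wrap the actual `FirestoreError` rather than a `String`):

```rust
use std::error::Error;
use std::fmt;

/// Application-level error carrying either a Firestore failure or any other
/// boxed error from internal logic. Names are illustrative, not from the crate.
#[derive(Debug)]
enum AppError {
    Firestore(String), // would wrap FirestoreError in real code
    Custom(Box<dyn Error + Send + Sync>),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::Firestore(msg) => write!(f, "firestore error: {msg}"),
            AppError::Custom(e) => write!(f, "internal error: {e}"),
        }
    }
}

impl Error for AppError {}
```

A transaction closure could then map both library errors and internal errors into `AppError` before returning.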

Updating nested dynamic fields

Hi,

thanks for your work on the library. I'm new to it, however I'm not new to Firestore. I've been trying to migrate Python codebase to Rust, and I've hit a roadblock when trying to update a document with nested dynamic fields. Here's an example:

[Screenshot of the nested document structure]

I have a document with nested fields. Note that the field 123 could be any number, and is not known at compile time. I want to add a value to the added array within that dynamic nested field.

After going through the examples, this is what I came up with:

database
    .fluent()
    .update()
    .in_col("test")
    .document_id("foo")
    .transforms(|t| {
        t.fields([t
            .field("bar.baz.123.added")
            .append_missing_elements(["987654321"])])
    })
    .only_transform()
    .add_to_transaction(&mut transaction)
    .expect("Couldn't add update to transaction");

This compiles, but I'm getting a runtime error, seemingly a rejected API call:

thread 'tokio-runtime-worker' panicked at 'Couldn't commit transaction: DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: "InvalidArgument" }, details: "status: InvalidArgument, message: \"Invalid property path \\\"bar.baz.123.added\\\". Unquoted property paths must match regex ([a-zA-Z_][a-zA-Z_0-9]*), and quoted property paths must match regex (`(?:[^`\\\\\\\\]|(?:\\\\\\\\.))+`)\", details: [], metadata: MetadataMap { headers: {\"content-type\": \"application/grpc\", \"date\": \"Wed, 23 Aug 2023 20:27:33 GMT\", \"alt-svc\": \"h3=\\\":443\\\"; ma=2592000,h3-29=\\\":443\\\"; ma=2592000\"} }", retry_possible: false })', src/main.rs:440:10

I've been used to this dot notation for fields from the official libraries, including the one for Python, so I'm not sure how this could be properly done in Rust; there seemingly are no examples that address this.

Any help would be appreciated.
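Going by the regexes in the server's error message, segments that are not valid unquoted identifiers (such as the numeric `123`) must be wrapped in backticks, i.e. the path should read ``bar.baz.`123`.added``. A hypothetical helper that applies that rule (firestore-rs may have its own escaping support; check the crate first):

```rust
/// Quote Firestore field-path segments that are not valid unquoted
/// identifiers ([a-zA-Z_][a-zA-Z_0-9]*), per the server's error message.
fn quote_segment(seg: &str) -> String {
    let mut chars = seg.chars();
    let valid = match chars.next() {
        Some(c) if c.is_ascii_alphabetic() || c == '_' => {
            chars.all(|c| c.is_ascii_alphanumeric() || c == '_')
        }
        _ => false, // empty or starts with a digit/symbol
    };
    if valid {
        seg.to_string()
    } else {
        // Backslashes and backticks inside a quoted segment must be escaped.
        format!("`{}`", seg.replace('\\', "\\\\").replace('`', "\\`"))
    }
}

/// Join segments into a dotted field path, quoting where required.
fn field_path(segments: &[&str]) -> String {
    segments.iter().map(|s| quote_segment(s)).collect::<Vec<_>>().join(".")
}
```

`field_path(&["bar", "baz", "123", "added"])` would then produce a path the server accepts for `.field(...)`.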

Can't connect to Firestore emulator due to missing token

I am attempting to do a basic setup that connects to a local Firestore emulator, however when I do so the connection fails due to a missing token.

I'm on an M2 laptop running Sonoma.

Here is my setup:

	// load the dotenv
	dotenv::dotenv().ok();
	let project_id = dotenv::var("FIRESTORE_PROJECT_ID").expect("FIRESTORE_PROJECT_ID must be set");

	let mut db_options = FirestoreDbOptions::new(project_id.clone());
	db_options.firebase_api_url = Some(format!("http://{}", dotenv::var("FIRESTORE_EMULATOR_HOST").expect("FIRESTORE_EMULATOR_HOST must be set")));

	println!("Connecting to Firestore with options: {:?}", db_options);

	let db = FirestoreDb::with_options(db_options).await;
	
	let db = match db {
		Ok(db_instance) => {
			println!("Connected to Firestore");
			db_instance
		},
		Err(e) => {
			panic!("Error connecting to Firestore: {:?}", e);
		}
	};
gcloud emulators firestore start --host-port "[::1]:8080"
Executing: /Users/johnbackes/google-cloud-sdk/platform/cloud-firestore-emulator/cloud_firestore_emulator start --host=::1 --port=8080
[firestore] Nov 12, 2023 3:48:13 PM com.google.cloud.datastore.emulator.firestore.websocket.WebSocketServer start
[firestore] INFO: Started WebSocket server on ws://[::1]:53941
[firestore] API endpoint: http://[::1]:8080
[firestore] If you are using a library that supports the FIRESTORE_EMULATOR_HOST environment variable, run:
[firestore]
[firestore]    export FIRESTORE_EMULATOR_HOST=[::1]:8080
[firestore]
[firestore] Dev App Server is now running.
[firestore]
[firestore] Nov 12, 2023 3:49:12 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
[firestore] INFO: Detected HTTP/2 connection.

^ (Notice that there is a valid connection)

> cargo run
...
Connecting to Firestore with options: FirestoreDbOptions { google_project_id: "project-id", database_id: "(default)", max_retries: 3, firebase_api_url: Some("http://[::1]:8080") }
thread 'main' panicked at 'Error connecting to Firestore: SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "TokenSource" }, message: "GCloud system error: token source error: not found token source" })', src/main.rs:108:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Is it necessary to create a fake token to connect to the Firestore emulator? It isn't in the docs.

Read-only transaction leads to "too much contention error"

In my project I have implemented a get-or-create function which uses a transaction: it first selects the document and, if it exists, returns it right away; otherwise it creates it.
However, in the case of just returning the document, the transaction seems to leave some kind of lock on the document, because when I try to delete it afterwards I get the following error message:

thread 'main' panicked at src/main.rs:58:10:
called `Result::unwrap()` on an `Err` value: DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: "Aborted" }, details: "status: Aborted, message: \"Too much contention on these documents. Please try again.\", details: [], metadata: MetadataMap { headers: {\"content-type\": \"application/grpc\", \"date\": \"Thu, 23 Nov 2023 09:46:09 GMT\", \"alt-svc\": \"h3=\\\":443\\\"; ma=2592000,h3-29=\\\":443\\\"; ma=2592000\"} }", retry_possible: true })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I've created a minimal example to showcase the problem (and the error message above is also created by this example):

use firestore::FirestoreDb;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct TestStruct {
    pub prop: String,
}

const COLLECTION_NAME: &str = "test_col";

#[tokio::main]
async fn main() {
    let db = FirestoreDb::new("gcp-project-id").await.unwrap();
    let doc_id = "testid";

    db.fluent()
        .insert()
        .into(COLLECTION_NAME)
        .document_id(doc_id)
        .object(&TestStruct {
            prop: "test".to_owned(),
        })
        .execute::<TestStruct>()
        .await
        .unwrap();

    db.run_transaction(|db, _transaction| {
        Box::pin(async move {
            let test: Option<TestStruct> = db
                .fluent()
                .select()
                .by_id_in(COLLECTION_NAME)
                .obj()
                .one(doc_id)
                .await?;

            // uncomment to make the error go away
            // db.fluent()
            //     .update()
            //     .in_col(COLLECTION_NAME)
            //     .document_id(doc_id)
            //     .object(&test)
            //     .add_to_transaction(_transaction)?;

            Ok(test)
        })
    })
    .await
    .unwrap();

    db.fluent()
        .delete()
        .from(COLLECTION_NAME)
        .document_id(doc_id)
        .execute()
        .await
        .unwrap();
}

When a write happens the problem vanishes.

Thank you for your consideration.

Get IDs of documents in a list

Hey @abdolence ,

I have the following function which gets all entries in the collection users:

pub async fn get_all_user_entries(firestore: &FirestoreDb) -> Result<Vec<FirestoreUser>, String> {
    let user_stream: BoxStream<FirestoreUser> = firestore
        .fluent()
        .list()
        .from("users")
        .obj()
        .stream_all()
        .await
        .expect("Failed to get users");

    let as_vec: Vec<FirestoreUser> = user_stream.collect().await;

    Ok(as_vec)
}

I'd like to also include the UID of every user in the FirestoreUser object. In my Firestore implementation the UID is the document name of every user document in the users collection. Is there a method I could use to get the document IDs when I use list()?

Thank you for your assistance.
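Two sketches that may help. First, firestore-rs reportedly can populate a specially named serde field (e.g. `#[serde(alias = "_firestore_id")] doc_id: Option<String>`) with the document ID during deserialization; treat that field name as an assumption to verify against the crate docs. Second, when working with raw `Document`s, the ID is simply the last segment of the full document name:

```rust
/// Extract the document ID (last path segment) from a full Firestore
/// document name such as
/// "projects/p/databases/(default)/documents/users/uid123".
/// Hypothetical helper for illustration.
fn document_id(full_name: &str) -> Option<&str> {
    full_name.rsplit('/').next().filter(|s| !s.is_empty())
}
```

This fallback works for any API that hands back the fully qualified `name` field of a document.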

Compilation error after upgrading to `0.10.1`

Got the following error after upgrading to 0.10.1. Last good version is 0.9.2. I cannot compile from source as well.

error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
  --> /Users/__/.cargo/registry/src/github.com-1ecc6299db9ec823/firestore-0.10.1/src/db/list_doc.rs:49:9
   |
49 |         &self,
   |         ^^^^^ this data with an anonymous lifetime `'_`...
...
84 |         Ok(stream)
   |         ---------- ...is used and required to live as long as `'static` here
   |
note: `'static` lifetime requirement introduced by the return type
  --> /Users/__/.cargo/registry/src/github.com-1ecc6299db9ec823/firestore-0.10.1/src/db/list_doc.rs:51:10
   |
51 |     ) -> FirestoreResult<BoxStream<Document>> {
   |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ requirement introduced by this return type
...
84 |         Ok(stream)
   |         ---------- because of this returned expression

error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
   --> /Users/__/.cargo/registry/src/github.com-1ecc6299db9ec823/firestore-0.10.1/src/db/transaction.rs:176:9
    |
176 |         &self,
    |         ^^^^^ this data with an anonymous lifetime `'_`...
...
179 |         FirestoreTransaction::new(self, options).await
    |         ---------------------------------------------- ...is used and required to live as long as `'static` here
    ```

Doc Comments

Most of the library's functions are not annotated with doc comments. It would be really helpful for development in many cases. I have started to annotate some, but it's somewhat difficult because of my lacking experience with the library.

FirestoreListenerTarget type with a negative i32 value will cause the gRPC query to silently fail and restart.

I would suggest simply using a u32 for the interior FirestoreListenerTarget type instead.

Here's the line I would propose the change:

pub struct FirestoreListenerTarget(i32);

Here's a copy of the error trace:

2023-03-10T22:11:26.146569Z DEBUG firestore::db::listen_changes: Listen error occurred DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: InvalidArgument }, details: status: InvalidArgument, message: "Invalid target ID: -842241870", details: [], metadata: MetadataMap { headers: {} }, retry_possible: false }). Restarting in 5s...

I can also confirm that when the int is positive the fluent listener API works flawlessly.

Let me know if you need more info?

Firestore listen changes never reacts to changes except TargetChange

I've been trying to use this firestore library. However, when I try to listen to it, I just get a lot of "TargetChange" events and no other events. Even when I manually delete the document from the collection I am listening to, it does not react at all (no prints or logs).

I am also using the same code as in the example listen_changes

Anyone else with better luck than me?

Firebase token

Hi, how can I obtain a Firebase token using gcloud auth login?
Error connecting to Firestore: SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "TokenSource" }, message: "GCloud system error: token source error: not found token source" })

Service Not Ready - Transport Error When Fetching Data

I am encountering an issue when trying to fetch data. The issue occurs no matter what operation I try to perform; even simple operations like listing all documents result in the same error.

Code

pub async fn get_one(db: &FirestoreDb, team: &str, id: &str) -> Option<Snippet> {
    db.fluent()
        .select()
        .by_id_in(team)
        .obj()
        .one(id)
        .await
        .unwrap()
}

Error

DatabaseError(FirestoreDatabaseError { public: FirestoreErrorPublicGenericDetails { code: "Unknown" }, details: "status: Unknown, message: \"Service was not ready: transport error\", details: [], metadata: MetadataMap { headers: {} }", retry_possible: false })

All versions > 0.37.5 are broken on build

error[E0063]: missing field find_nearest in initializer of gcloud_sdk::google::firestore::v1::StructuredQuery
--> /home/vitor/.asdf/installs/rust/1.77.1/registry/src/index.crates.io-6f17d22bba15001f/firestore-0.38.0/src/db/query_models.rs:46:9
|
46 | StructuredQuery {
| ^^^^^^^^^^^^^^^ missing find_nearest

error[E0063]: missing field explain_options in initializer of gcloud_sdk::google::firestore::v1::RunQueryRequest
--> /home/vitor/.asdf/installs/rust/1.77.1/registry/src/index.crates.io-6f17d22bba15001f/firestore-0.38.0/src/db/query.rs:75:44
|
75 | Ok(gcloud_sdk::tonic::Request::new(RunQueryRequest {
| ^^^^^^^^^^^^^^^ missing explain_options

error[E0063]: missing field explain_options in initializer of gcloud_sdk::google::firestore::v1::RunAggregationQueryRequest
--> /home/vitor/.asdf/installs/rust/1.77.1/registry/src/index.crates.io-6f17d22bba15001f/firestore-0.38.0/src/db/aggregated_query.rs:276:44
|
276 | Ok(gcloud_sdk::tonic::Request::new(RunAggregationQueryRequest {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ missing explain_options

Nested Collections

How do I operate on collections inside docs?
I want to create something like coll1/doc1/coll2/doc2.

Reads in transactions

This might be a silly question, but I noticed in your run_transaction example that the select wasn't explicitly added to the transaction.
Is it automatically part of the transaction? Can I be sure that the selected document(s) aren't updated by other queries/transactions while a transaction is working on them?

Using the same struct with DateTime for Firestore<->Backend<->Frontend communication

Is there a way to use the same struct for both Backend<->Frontend and Backend<->Firestore communication?
For the Backend<->Firestore we need DateTime to FirestoreTimestamp conversion.
For the Backend<->Frontend we need u64 to DateTime conversion.

This is my current way of solving this:

#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "camelCase")]
pub struct FooFirestore {
    pub some_time: Option<FirestoreTimestamp>,
}

#[derive(Serialize, Deserialize, Default, TypeScriptify)]
#[serde(rename_all = "camelCase")]
pub struct FooJson {
    #[serde(with = "chrono::serde::ts_seconds_option")]
    #[ts(ts_as = "Option<i64>")]
    pub some_time: Option<DateTime<Utc>>,
}

impl From<FooFirestore> for FooJson {
    fn from(value: FooFirestore) -> Self {
        Self {
            some_time: value.some_time.map(|t| t.0),
        }
    }
}

impl From<FooJson> for FooFirestore {
    fn from(value: FooJson) -> Self {
        Self {
            some_time: value.some_time.map(|t| t.into()),
        }
    }
}

This is a bit verbose, especially if I have more properties. Every property is quadrupled in code.
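The per-field `From` boilerplate can be reduced with a small declarative macro; the sketch below assumes plain (non-`Option`) fields whose types implement `From` into each other (an `Option` field would need a `map(Into::into)` arm), and all type names are illustrative stand-ins for FooFirestore/FooJson:

```rust
/// Generate a `From<$a> for $b` impl that converts each listed field with
/// `.into()`. Invoke twice (A => B and B => A) for both directions.
macro_rules! mirror_from {
    ($a:ty => $b:ty, { $($field:ident),+ $(,)? }) => {
        impl From<$a> for $b {
            fn from(v: $a) -> Self {
                Self { $($field: v.$field.into()),+ }
            }
        }
    };
}

// Hypothetical mirrored types standing in for FooFirestore / FooJson:
struct Millis(i64);
struct Seconds(i64);
impl From<Millis> for Seconds {
    fn from(m: Millis) -> Self { Seconds(m.0 / 1000) }
}

struct FooA { some_time: Millis }
struct FooB { some_time: Seconds }

mirror_from!(FooA => FooB, { some_time });
```

With more properties, each extra field is just one more identifier in the macro invocation instead of a hand-written conversion line per direction.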

Email/Password auth

Hello!

I'm trying to interact with a service that uses Firebase for both auth and storage; more specifically, it's using the signInWithEmailAndPassword(email, password) method, and I've got it working in a small JS script.
However, I don't see any email/password auth methods in firestore-rs or gcloud-sdk-rs; it's seemingly all token-based.

Is there a way for me to get a token from the sign-in process somehow or did I miss something?

Thanks in advance!

batch_stream_get_objects_by_ids result doesn't communicate a deserialization error

If one of the results deserialized by batch_stream_get_objects_by_ids produces a deserialization error, it is only logged, but the Result of the function as a whole indicates success anyway.

It's probably debatable if this an issue or not but it caught me by surprise since other functions return the deserialization error in their result.

How to get `id` of document when using `.select()`?

I have the following query and it's working flawlessly:

let challenges = db.fluent()
    .select()
    .from("challenges")
    .filter(|q| {
        let now = firestore::FirestoreTimestamp(Utc::now());

        q.for_all([
            q.field("status").equal("unknown"),
            // Filter out all challenges in the future
            q.field("startDate").less_than(now),
        ])
    })
    .obj::<FirChallenge>()
    .stream_query()
    .await;

However, how can I retrieve the id? I need this because I want to update the challenge status after I'm done processing the document. The program should be a cron job that runs regularly through the collection.

Mismatched types

Trying to use firebase in a project and get following error:

error[E0308]: mismatched types
    --> /home/shinmen/.cargo/registry/src/github.com-1ecc6299db9ec823/firestore-0.11.0/src/db/transaction.rs:113:62
     |
113  |         transaction_span.record("/firestore/transaction_id", hex_trans_id);
     |                          ------                              ^^^^^^^^^^^^
     |                          |                                   |
     |                          |                                   expected reference, found struct `std::string::String`
     |                          |                                   help: consider borrowing here: `&hex_trans_id`
     |                          arguments to this function are incorrect
     |
     = note: expected reference `&_`
                   found struct `std::string::String`
note: associated function defined here
    --> /home/shinmen/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.35/src/span.rs:1194:12
     |
1194 |     pub fn record<Q: ?Sized, V>(&self, field: &Q, value: &V) -> &Self
     |            ^^^^^^

error[E0277]: the size for values of type `str` cannot be known at compilation time
    --> /home/shinmen/.cargo/registry/src/github.com-1ecc6299db9ec823/firestore-0.11.0/src/db/transaction.rs:153:35
     |
153  |             self.transaction_span.record(
     |                                   ^^^^^^ doesn't have a size known at compile-time
     |
     = help: the trait `Sized` is not implemented for `str`
note: required by a bound in `tracing::Span::record`
    --> /home/shinmen/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.35/src/span.rs:1194:30
     |
1194 |     pub fn record<Q: ?Sized, V>(&self, field: &Q, value: &V) -> &Self
     |                              ^ required by this bound in `tracing::Span::record`

Some errors have detailed explanations: E0277, E0308.
For more information about an error, try `rustc --explain E0277`.
error: could not compile `firestore` due to 2 previous errors
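An editor's note on a likely cause (an assumption, not verified against this exact lockfile): the resolved tracing 0.1.35 still takes `Span::record` values by reference, while firestore 0.11 passes them by value, an API tracing introduced in 0.1.37. Refreshing the pinned tracing version usually resolves this kind of semver drift:

```toml
# Sketch: run `cargo update -p tracing` to refresh the lockfile, or require a
# version with the by-value Span::record API explicitly in Cargo.toml:
[dependencies]
tracing = "0.1.37"
```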

Adding document with single parent collection

I want to insert a document with the path "parent_1/customers/new_document_id/", but using the db.parent_path method, I only seem to be able to insert it at the path "parent_1/parent_1/customers/new_document_id/".

Here's how I'm inserting it

let parent_path = db.parent_path("parent_1", "parent_1").unwrap();
let result = db
    .fluent()
    .insert()
    .into("customers")
    .document_id(&customer.email)
    .parent(&parent_path)
    .object(&customer)
    .execute()
    .await
    .unwrap();
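An editor's sketch of why the duplication appears (illustrative string building, not firestore-rs internals): a parent path contributes both a collection segment and a document segment, so passing "parent_1" as both arguments necessarily yields two "parent_1" segments in the final path.

```rust
// Firestore paths alternate collection/document segments. A parent path is
// one collection segment plus one document segment ...
fn parent_path(collection: &str, doc_id: &str) -> String {
    format!("{}/{}", collection, doc_id)
}

// ... and inserting appends the target collection and the new document ID.
fn insert_path(parent: &str, collection: &str, new_doc_id: &str) -> String {
    format!("{}/{}/{}", parent, collection, new_doc_id)
}

fn main() {
    let parent = parent_path("parent_1", "parent_1");
    // parent_1/parent_1/customers/new_id — the duplication from the issue
    println!("{}", insert_path(&parent, "customers", "new_id"));
}
```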

Add environment variable to redirect the firestore client to a local emulator

For developing against a local firestore emulator I want to overwrite the Google API URL for the firestore client.

I tried this in my fork here: master...xifel:firestore-rs:google_url_env, and it works for me.
However, I'm very new to Rust and not sure whether this is the best way to go about it, since it adds lazy_static as a new dependency.

The name of the environment variable is also used by the official Firestore Admin SDK https://firebase.google.com/docs/emulator-suite/connect_firestore#web-version-9 and can be used the same way.

If you would like to merge that change as-is or with modifications, I would gladly create the pull request. Otherwise, I'd just like to propose the feature.
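For reference, the variable the official SDKs (and the linked doc) use is `FIRESTORE_EMULATOR_HOST`, e.g. `localhost:8080`. A std-only sketch of honoring it (the fallback endpoint and function name are illustrative, not the crate's implementation):

```rust
// Sketch: pick the emulator endpoint when FIRESTORE_EMULATOR_HOST is set,
// otherwise fall back to the production API. In real use, the argument would
// come from std::env::var("FIRESTORE_EMULATOR_HOST").ok().
fn firestore_endpoint(emulator_host: Option<&str>) -> String {
    match emulator_host {
        Some(host) => format!("http://{}", host), // emulator speaks plaintext
        None => "https://firestore.googleapis.com".to_string(),
    }
}

fn main() {
    let host = Some("localhost:8080");
    println!("{}", firestore_endpoint(host)); // http://localhost:8080
}
```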
