
chiselstore's Introduction




What is ChiselStrike?

ChiselStrike is a complete backend bundled in one piece: your one-stop shop for all your backend needs, powered by TypeScript.

Why ChiselStrike?

Putting together a backend is hard work. Databases? ORM? Business logic? Data access policies? And how to offer all of that through an API?

Learning all that, plus figuring out the interactions between all the components, can be a drain on an application developer's time. Low-code approaches allow for super fast prototypes, but as you need to scale and evolve, the time you saved on the prototype is now owed back with interest in refactorings, migrations, and more.

ChiselStrike provides everything you need to handle and evolve your backend, from the data layer to the business logic, allowing you to focus on what you care about, your code, rather than worrying about database schemas, migrations, or even database operations.

All driven by TypeScript, so your backend can evolve as your code evolves.

How does that work?

ChiselStrike keeps things as close as possible to pure TypeScript, and a translation layer takes care of index creation, database query generation, and even communicating with external systems like Kafka.

Internally, ChiselStrike uses a SQLite database so there's no need to set up any external data layer (although it is possible to hook up an external Postgres-compatible database). ChiselStrike also abstracts other concepts common to complex backends, like Kafka-compatible streaming platforms.

Quick start

To get a CRUD API working in 30 seconds or less, first create a new project:

npx -y create-chiselstrike-app@latest my-app
cd my-app

Add a model by writing the following TypeScript code to models/BlogComment.ts:

import { ChiselEntity } from "@chiselstrike/api"

export class BlogComment extends ChiselEntity {
    content: string = "";
    by: string = "";
}

Add a route by writing the following TypeScript code to routes/comments.ts:

import { BlogComment } from "../models/BlogComment";
export default BlogComment.crud();

Start the development server with:

npm run dev

This server will provide a CRUD API that you can use to add and query instances of the BlogComment entity.

curl -X POST -d '{"content": "First comment", "by": "Jill"}' localhost:8080/dev/comments

curl localhost:8080/dev/comments
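
The same endpoints can be called from frontend or client code. Here is a minimal sketch using the standard fetch API against the dev server started above; the URL and JSON fields come from the curl examples, and the helper function names are just illustrative.

// Minimal sketch: calling the generated CRUD endpoint from TypeScript with the
// standard fetch API. The URL and fields mirror the curl examples above.
async function addComment(content: string, by: string): Promise<void> {
    const res = await fetch("http://localhost:8080/dev/comments", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ content, by }),
    });
    if (!res.ok) {
        throw new Error(`Failed to create comment: ${res.status}`);
    }
}

async function listComments(): Promise<unknown> {
    const res = await fetch("http://localhost:8080/dev/comments");
    return res.json();
}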

For a more detailed walkthrough of how to get started with ChiselStrike, follow our Getting started tutorial.

Is ChiselStrike a database?

No. The founding team at ChiselStrike has written databases from scratch before, and we believe there are better things to do in life, like pretty much anything else. ChiselStrike comes bundled with SQLite, providing developers with a zero-conf relational-like abstraction that allows one to think of backends from the business needs down, instead of from the database up.

Instead, you can think of ChiselStrike as a big pool of global shared memory. The data access API is an integral part of ChiselStrike and offers developers a way to just code, without worrying about the underlying database (any more than you worry about what happens at each level of the memory hierarchy: some people do, but most people don't have to!).

In production, ChiselStrike can also hook into a Kafka-compatible streaming platform when available, and transparently drive both it and the database from a unified TypeScript/JavaScript abstraction.

Is ChiselStrike an ORM?

Kind of. ChiselStrike has some aspects that overlap with traditional ORMs, in that it allows you to access database abstractions in your programming language. However, traditional ORMs start from the database and work upwards: changes are made to the database schema, then bubbled up through migrations, and elements of the database invariably leak into the API.

ChiselStrike, on the other hand, starts from your code and automates the decisions needed to implement it in the database, much like a compiler would.

Let's look at ChiselStrike's documentation for an example of what's needed to create a comment on a blog post:

import { ChiselEntity } from "@chiselstrike/api"

export class BlogComment extends ChiselEntity {
    content: string = "";
    by: string = "";
}

The first thing you will notice is that there is no need to specify how those things map to the underlying database. No tracking of primary keys, column types, etc.

Now imagine you need to start tracking whether this was created by a human or a bot. You can change your model to say:

import { ChiselEntity } from "@chiselstrike/api"

export class BlogComment extends ChiselEntity {
    content: string = "";
    by: string = "";
    isHuman: boolean = false;
}

and that's it! There are no migrations and no need to alter a table.

Furthermore, if you need to find all comments written by humans, you can just write a lambda instead of trying to craft a database query in TypeScript:

const all = await BlogComment.findMany(p => p.isHuman);
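
As a sketch of how such a query could be exposed over HTTP, the hypothetical route below (say, routes/human-comments.ts) returns only the human-authored comments. It assumes a route module can export a default async handler that returns a standard Response, alongside the generated crud() route shown earlier.

import { BlogComment } from "../models/BlogComment";

// Hypothetical custom route: returns only comments written by humans,
// reusing the findMany lambda from the example above.
export default async function (): Promise<Response> {
    const humans = await BlogComment.findMany(p => p.isHuman);
    return new Response(JSON.stringify(humans), {
        headers: { "content-type": "application/json" },
    });
}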

Is ChiselStrike a TypeScript runtime?

ChiselStrike includes a TypeScript runtime: the fantastic and beloved Deno. That's the last piece of the puzzle, bundled together with the data API and the database. It allows you to develop everything locally from your laptop and integrate with your favorite frontend framework. Be it Next.js, Gatsby, Remix, or any others - we're cool with all of them!

That's all fine and well, but I need more than that!

We hear you. No modern application is complete without authentication and security. ChiselStrike integrates with next-auth and allows you to specify authentication entities directly from your TypeScript models.

You can then add a policy file that details which fields can be accessed, and which endpoints are available.

For example, you can store the blog authors as part of the models,

import { ChiselEntity, AuthUser, labels } from "@chiselstrike/api"

export class BlogComment extends ChiselEntity {
    content: string = "";
    @labels("protect") author: AuthUser;
}

and then write a policy saying that users should only be able to see the comments that they themselves originated:

labels:
  - name: protect
    transform: match_login

Now your security policies are declaratively applied separately from the code, and you can easily grasp what's going on.

In Summary

ChiselStrike provides everything you need to handle your backend, from the data layer to the business logic, wrapped in powerful abstractions that let you just code and never worry about database schemas, migrations, and operations again.

It allows you to declaratively specify compliance policies around who can access the data and under which circumstances.

Your ChiselStrike files can go into their own repo, or even better, into a subdirectory of your existing frontend repo. You can code your presentation and data layer together, and turn any frontend framework into a full-stack (including the database layer!) framework in minutes.

Contributing

To build and develop from source:

git submodule update --init --recursive
cargo build

That will build the chiseld server and chisel utility.

You can now use create-chiselstrike-app to install a local version of the API:

node ./packages/create-chiselstrike-app --chisel-version="file:../packages/chiselstrike-api" my-backend

And then replace instances of npm run with direct calls to the new binaries. For example, instead of npm run dev, run

cd my-backend
npm i esbuild
../target/debug/chisel dev

Also, consider:

  • Open (or fix!) an issue 🙇‍♂️

  • Join our discord community 🤩

  • Start a discussion 🙋‍♀️

Next steps?

Our documentation (including a quick tutorial) is available here.

chiselstore's People

Contributors

kilerd, penberg


chiselstore's Issues

Optimize reads

Currently, ChiselStore performs reads using the Raft consensus protocol -- just like writes -- which provides strong consistency. However, strong consistency also makes reads pretty expensive. One way to optimize reads is to provide a "local read" option, which performs reads on any node. The "local read" option relaxes the consistency model by allowing stale reads in exchange for higher performance.

Connection pool for RPC

Currently, RPCs establish a new connection for every invocation. Let's use a connection pool to make this faster.

Read-your-writes consistency guarantee for relaxed reads

With relaxed reads, there's currently no guarantee of read-your-writes consistency. This is because a write is acknowledged when it is applied to the state machine of the leader, but not necessarily on the local replica.

Example:

  • A follower node F receives a write request, which is delegated to node L, which is the leader.
  • The write is replicated to the logs of all nodes (but not necessarily applied).
  • The write is applied to the state machine of L.
  • L acknowledges the write to F and F acknowledges the write to the client.
  • A relaxed read request arrives on node F, which does not yet have the write applied to its state machine, violating read-your-writes consistency.

Use in-memory SQLite

We currently use file-backed SQLite databases because the basic in-memory SQLite does not seem to support concurrent reads:

75ec0c7

Let's look at ways to turn on in-memory again. For example, the "memdb" VFS should support concurrent reads.

Executing non-deterministic SQL functions

Non-deterministic SQL functions such as date() and random() must be evaluated only once. We can do this by evaluating the functions only on the leader and replicating the evaluated values to the followers.

Can't build the project or even run the example

cargo build
    Updating crates.io index
    Updating git repository `https://github.com/chiselstrike/little-raft.git`
   Compiling chiselstore v0.1.0 (/Users/cscetbon/src/git/chiselstore/core)
error: failed to run custom build command for `chiselstore v0.1.0 (/Users/cscetbon/src/git/chiselstore/core)`

Caused by:
  process didn't exit successfully: `/Users/cscetbon/src/git/chiselstore/target/debug/build/chiselstore-8207c42d14ef8b10/build-script-build` (exit status: 1)
  --- stderr
  error running rustfmt: Os { code: 2, kind: NotFound, message: "No such file or directory" }

Example server fails to start

When attempting to start the example server, I trip over:

penberg@vonneumann chiselstore % cargo run --example gouged -- --id 1 --peers 2 3
    Updating crates.io index
   Compiling event-listener v2.5.3
   Compiling chiselstore v0.1.0 (/Users/penberg/src/chiselstrike/chiselstore/core)
   Compiling async-mutex v1.4.0
    Finished dev [unoptimized + debuginfo] target(s) in 6.63s
     Running `target/debug/examples/gouged --id 1 --peers 2 3`
RPC listening to 127.0.0.1:50001 ...
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 61, kind: ConnectionRefused, message: "Connection refused" })))', core/src/rpc.rs:55:52
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 61, kind: ConnectionRefused, message: "Connection refused" })))', core/src/rpc.rs:55:52

I bisected the problem to 75ec0c7

Enforce exactly-once semantics

If a leader crashes after applying a command to the state machine, but before responding to a client, the client will re-try the command, which breaks exactly-once semantics. One way to fix this is to make the client include an ID for every command. A leader can then check for the ID in its log before applying a command to its state machine to detect stale commands.
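
The deduplication idea can be sketched as follows (shown in TypeScript purely for illustration; ChiselStore itself is written in Rust, and the names here are hypothetical, not ChiselStore APIs).

// Illustrative sketch of exactly-once application: the leader remembers which
// command IDs it has already applied (standing in for checking its log) and
// skips duplicates when a client retries.
type Command = { id: string; sql: string };

const appliedIds = new Set<string>();

function applyCommand(cmd: Command, execute: (sql: string) => void): void {
    if (appliedIds.has(cmd.id)) {
        // A retry of a command that already reached the state machine; ignore it.
        return;
    }
    execute(cmd.sql);
    appliedIds.add(cmd.id);
}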

Snapshot support

Currently, the replicated Raft log grows unbounded. First, we need to add Raft snapshot support to Little Raft, which allows truncating the log. We then need to add snapshot support to ChiselStore with, for example, SQLite online backups: https://www.sqlite.org/backup.html. IOW, normal reads and writes could go to an in-memory database, and at snapshot time, an on-disk backup is created. When a node is restarted, the on-disk backup could be read to an in-memory database.

Improve clippy lints

@glommer suggests:

Pekka, please add a variation of #![warn(missing_docs, missing_debug_implementations, rust_2018_idioms)] to the top-level lib file so we make sure that all public interfaces have Debug, documentation, etc (likely you will want to enforce 2021 idioms)

I/O errors should crash the process

If we have an I/O error, we must not resume the completion, but instead crash the process so that a node with partial results does not continue in the same Raft cluster.

SQLite transaction dead-lock

We currently let SQL transaction commands replicate, which can result in a deadlock.

For example, if a writer issues a BEGIN TRANSACTION command but dies after it has replicated, the cluster is left in a situation where all nodes have an active transaction, but there is no writer to commit or roll back.

Make node IP addresses configurable for `gouged`

Currently, gouged assumes that all nodes are running on the local machine and uses a mapping between nodes and ports. Make the peer IP addresses configurable to make testing with multiple machines easier.
