OpenFGA

A high-performance and flexible authorization/permission engine built for developers and inspired by Google Zanzibar.

OpenFGA is designed to make it easy for developers to model their application permissions and add and integrate fine-grained authorization into their applications.

It allows in-memory data storage for quick development, as well as pluggable database modules. It currently supports PostgreSQL 14 and MySQL 8.

It offers an HTTP API and a gRPC API. It has SDKs for Java, Node.js/JavaScript, Go, Python, and .NET. Look in our Community section for third-party SDKs and tools. It can also be used as a library (see example).

Getting Started

The following section aims to help you get started quickly. Please look at our official documentation for in-depth information.

Setup and Installation

ℹ️ The following sections set up an OpenFGA server using the default configuration values. These are intended for rapid development, not for a production environment. Data written to an OpenFGA instance using the default configuration with the memory storage engine will not persist after the service is stopped.

For more information on how to configure the OpenFGA server, please take a look at our official documentation on Running in Production.

Docker

OpenFGA is available on Dockerhub, so you can quickly start it using the in-memory datastore by running the following commands:

docker pull openfga/openfga
docker run -p 8080:8080 -p 3000:3000 openfga/openfga run

Tip

The OPENFGA_HTTP_ADDR environment variable can be used to configure the address at which the playground expects the OpenFGA server to be. For example, docker run -e OPENFGA_PLAYGROUND_ENABLED=true -e OPENFGA_HTTP_ADDR=0.0.0.0:4000 -p 4000:4000 -p 3000:3000 openfga/openfga run will start the OpenFGA server on port 4000 and configure the playground accordingly.

Docker Compose

docker-compose.yaml provides an example of how to launch OpenFGA with Postgres using docker compose.

  1. First, either clone this repo or curl the docker-compose.yaml file with the following command:

    curl -LO https://openfga.dev/docker-compose.yaml
  2. Then, run the following command:

    docker compose up

Package Managers

If you are a Homebrew user, you can install OpenFGA with the following command:

brew install openfga

Pre-compiled Binaries

Download your platform's latest release and extract it. Then run the binary with the command:

./openfga run

Building from Source

There are two recommended options for building OpenFGA from source code:

Building from source with go install

Make sure you have Go 1.20 or later installed. See the Go downloads page.

You can install from source using Go modules:

  1. First, make sure $GOBIN is on your shell $PATH:

    export PATH=$PATH:$(go env GOBIN)
  2. Then use the install command:

    go install github.com/openfga/openfga/cmd/openfga@latest
  3. Run the server with:

    openfga run

Building from source with go build

Alternatively, you can build OpenFGA by cloning the project from this GitHub repo and then building it with the go build command:

  1. Clone the repo to a local directory, and navigate to that directory:

    git clone https://github.com/openfga/openfga.git && cd openfga
  2. Then use the build command:

    go build -o ./openfga ./cmd/openfga
  3. Run the server with:

    ./openfga run

Verifying the Installation

Now that you have set up and installed OpenFGA, you can test your installation by creating an OpenFGA store.

curl -X POST 'localhost:8080/stores' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "openfga-demo"
}'

If everything is running correctly, you should get a response with information about the newly created store, for example:

{
  "id": "01G3EMTKQRKJ93PFVDA1SJHWD2",
  "name": "openfga-demo",
  "created_at": "2022-05-19T17:11:12.888680Z",
  "updated_at": "2022-05-19T17:11:12.888680Z"
}
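
If you prefer to use one of the SDKs, the same store can be created programmatically. The following is a minimal sketch assuming the JavaScript SDK (@openfga/sdk); the exact client configuration options may differ between SDK versions.

// Sketch: create a store with the JavaScript SDK instead of curl.
// Assumes @openfga/sdk is installed; configuration options may vary by version.
import { OpenFgaClient } from "@openfga/sdk";

const fgaClient = new OpenFgaClient({
  apiUrl: "http://localhost:8080", // the OpenFGA HTTP API started above
});

const store = await fgaClient.createStore({ name: "openfga-demo" });
console.log(store.id, store.name);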

Playground

The Playground facilitates rapid development by allowing you to visualize and model your application's authorization model(s) and manage relationship tuples with a locally running OpenFGA instance.

To run OpenFGA with the Playground disabled, provide the --playground-enabled=false flag.

./openfga run --playground-enabled=false

Once OpenFGA is running, by default, the Playground can be accessed at http://localhost:3000/playground.

In the event that a port other than the default port is required, the --playground-port flag can be set to change it. For example,

./openfga run --playground-enabled --playground-port 3001

Profiler (pprof)

Profiling through pprof can be enabled on the OpenFGA server by providing the --profiler-enabled flag.

./openfga run --profiler-enabled

This will start serving profiling data on port 3001. You can see that data by visiting http://localhost:3001/debug/pprof.

If you need to serve the profiler on a different address, you can do so by specifying the --profiler-addr flag. For example,

./openfga run --profiler-enabled --profiler-addr :3002

Once the OpenFGA server is running, in another window you can run the following command to generate a compressed CPU profile:

go tool pprof -proto -seconds 60 http://localhost:3001/debug/pprof/profile
# will collect data for 60 seconds and generate a file like pprof.samples.cpu.001.pb.gz

That file can be analyzed visually by running the following command and then visiting http://localhost:8084:

go tool pprof -http=localhost:8084 pprof.samples.cpu.001.pb.gz

Next Steps

Take a look at the examples in our official documentation to see what you can do next.

Don't hesitate to browse the Documentation and API Reference.

Limitations

MySQL Storage engine

The MySQL storage engine has a lower length limit for some properties of a tuple compared with other storage backends. For more information see the docs.

OpenFGA's MySQL Storage Adapter was contributed to OpenFGA by @twintag. Thanks!

Production Readiness

The core OpenFGA service has been in use by Okta FGA in production since December 2021.

OpenFGA's Memory Storage Adapter was built for development purposes only and is not recommended for a production environment, because it is not designed for scalable queries and has no support for persistence.

You can learn about more organizations using OpenFGA in production here. If your organization is using OpenFGA in production please consider adding it to the list.

The OpenFGA team will do its best to address all production issues with high priority.

Contributing

See CONTRIBUTING.

Community Meetings

We hold a monthly meeting to interact with the community, collaborate and receive/provide feedback. You can find more details, including the time, our agenda, and the meeting minutes here.

Roadmap Issues

Batch Checks in OpenFGA Server

We currently support issuing checks in batch in the OpenFGA SDKs. This works by issuing checks in parallel, and we believe it's a good approach in general.

When you deploy OpenFGA with multiple nodes, the load can be spread across nodes, and the total response time would be close to the response time of the most expensive check call.

However, when the number of checks is large, it could be useful to resolve all of them in the server with a single round trip.
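
For illustration, a minimal sketch of the current client-side approach, assuming the JavaScript SDK and an already-configured fgaClient (the tuples below are made up):

// Sketch: issue several Check calls in parallel and collect the results.
// `fgaClient` is assumed to be a configured OpenFGA JavaScript SDK client.
const checks = [
  { user: "user:anne", relation: "can_view", object: "document:pricing" },
  { user: "user:anne", relation: "can_edit", object: "document:pricing" },
  { user: "user:bob", relation: "can_view", object: "document:roadmap" },
];

const results = await Promise.all(
  checks.map((tupleKey) => fgaClient.check(tupleKey))
);

// results[i].allowed corresponds to checks[i]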

Reference Application

We want to build a reference application that can be used to explain/demo OpenFGA.

Support additional ABAC scenarios

OpenFGA provides support for some basic ABAC scenarios with contextual tuples.

We want to explore additional options, which could include:

  • Specifying conditions when writing a tuple (e.g. this tuple is only valid if condition X)
  • Support storing/referring to resource attributes (e.g. write a resource attribute like "feature:X status = 'in-development'" and refer to it in the model)
  • Support conditional functions in the DSL that can be called from the model
  • Grant access for a specific period of time, e.g. 10 minutes after the permission was granted.
  • Allow implementing entitlements scenarios that restrict based on usage data (e.g. max number of collaborators)
  • IP-based filtering, where the IP ranges can be coded in the DSL

Related conversations:
https://github.com/orgs/openfga/discussions/114

An early POC was demoed here: https://youtu.be/sFqk42fJy_E?t=667

Simple Caching Implementation

Implement a per-node sub-problem cache.

It should cache 'Check' results in-memory. This will speed up check queries that traverse the same paths more than once and check queries that are frequently used.

Batch Check Requests in SDKs

There are different scenarios where developers may want to make multiple calls to the Check API.

OpenFGA does not provide a batch check API, but we want to simplify the developer experience by providing a way to make multiple Check calls in parallel from the SDKs.

We'll eventually add it to the backend once we understand its value and usage patterns.

ListRelations support in SDKs

Developers want to have a simple way to know which relations a user has with a specific object.

To build a UI that shows which relations a user has with an object, developers could use the Read endpoint if all of those relations are direct relations. However, if some of them are indirect relations, they'll need to call Check for each one.

To simplify this use case, we can provide an API like

const response = await fgaClient.listRelations({
  user: "user:123",
  object: "document:pricing",
  relations: ["can_view", "can_edit"]
});

// returns response.relations: ["can_view"]

This API can be implemented in the SDKs, by calling multiple checks in batch, or in the server. We'll start by implementing it in the client to better understand its value and usage, and implement it in the server later.
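
As a rough sketch, such a client-side implementation could look like the following, assuming a configured fgaClient; the listRelations helper itself is hypothetical:

// Sketch: a client-side listRelations built on parallel Check calls.
// `fgaClient` is an assumed, configured SDK client; the helper is hypothetical.
async function listRelations({ user, object, relations }) {
  const results = await Promise.all(
    relations.map((relation) => fgaClient.check({ user, relation, object }))
  );
  return { relations: relations.filter((_, i) => results[i].allowed) };
}

const response = await listRelations({
  user: "user:123",
  object: "document:pricing",
  relations: ["can_view", "can_edit"],
});
// response.relations -> e.g. ["can_view"]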

Improve latency in complex models

OpenFGA is currently optimized to offer low latency for models with up to 2 nested levels.

In order to provide that latency for models with more nested levels we need to implement a Zanzibar feature called “Leopard indexes”.

The following is the definition in Google's Zanzibar paper:

Leopard is an indexing system used to optimize operations on large and deeply nested sets. It reads periodic snapshots of ACL data and watches for changes between snapshots. It performs transformations on that data, such as denormalization, and responds to requests from aclservers.

Async Bulk Import

OpenFGA currently enables importing tuples from a JSON/YAML/CSV file with the CLI.

There's an opportunity to build a more reliable importing process that would:

  • Report progress
  • Retry errors
  • Provide a final report with the number of tuples imported and the tuples that failed

ListUsers

In the same way we provide a ListObjects endpoint that lists all resources for a specific user & relation, we should provide a way to list which users have a specific relationship with a specific resource, for example:

const response = await fgaClient.listUsers({
  object: "document:pricing",
  relation: "reader",
  type: "user"
});

// returns response.users: ["user:jon", "user:maria"]

There's a PR for the RFC here.

GraphQL Integration

We want to provide tooling/guidance on how to use OpenFGA with GraphQL.

There's a relevant conversation here.

Distributed Caching implementation

OpenFGA has a per-node cache (#38).

It is possible to improve performance by taking advantage of a distributed cache or an external cache service.

Additional OpenFGA API Authorization Options

OpenFGA currently supports pre-shared keys and OIDC for authenticating calls to the APIs. Those credentials are global and allow performing any action in any store.

We want to provide more granularity for authorizing calls to the OpenFGA API. Some scenarios:

  • Different credentials for each FGA store.
  • Different credentials with different permissions per FGA store (e.g. some credentials can perform writes while others cannot).
  • Different credentials with different permissions for different types in the FGA store (e.g. some credentials allow writing tuples for documents and others allow writing tuples for users, or define those permissions per module)

This RFC discusses different alternatives in more depth: openfga/rfcs#10

Simplify creating local indexes

To address Search with Permissions scenarios, developers might need to create a local index for specific user types, objects, and relations.

For example, if they have a model like:

model
  schema 1.1

type user

type document
  relations
    define owner: [user]
    define can_view: [user] or owner

You might want to have a document_viewers table with (user_id, document_id) columns, containing a record for every document a user can view.

We want to make it simpler for OpenFGA users to create and maintain such a table.
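
One way such a table could be populated today is by calling ListObjects per user and writing the results to the application's own database. A minimal sketch, assuming the JavaScript SDK and a hypothetical saveIndexRow helper:

// Sketch: rebuild the document_viewers index for one user via ListObjects.
// `fgaClient` is an assumed, configured SDK client; `saveIndexRow` is a
// hypothetical helper that writes (user_id, document_id) rows to the
// application's own database.
async function refreshViewerIndex(userId) {
  const { objects } = await fgaClient.listObjects({
    user: `user:${userId}`,
    relation: "can_view",
    type: "document",
  });

  for (const object of objects) {
    // objects look like "document:<id>"
    await saveIndexRow({ user_id: userId, document_id: object.split(":")[1] });
  }
}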

Async Bulk Delete API

Tuples can only be deleted individually.

If you want to delete a set of tuples, you need to first find them (by using the Read API, or by constructing them from data in your own databases), and then call the Write API with the set of tuples to delete.
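
For example, with the JavaScript SDK the manual approach looks roughly like this (a sketch; pagination and write batch limits are ignored for brevity, and fgaClient is an assumed, configured client):

// Sketch: read all tuples for an object, then delete them via the Write API.
const { tuples } = await fgaClient.read({ object: "document:pricing" });

await fgaClient.write({
  deletes: tuples.map(({ key }) => ({
    user: key.user,
    relation: key.relation,
    object: key.object,
  })),
});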

We should provide a way to batch delete tuples, based on different conditions. Conditions should be consistent with the ones we have in the Read API.

This would be an async API, where you'd be able to query for status or get notified after it finishes.

Open questions:

  • How should the async behavior be modeled? A callback webhook? Polling? A streaming callback?
  • What should the API return? The number of tuples deleted plus the tuples with errors? All deleted tuples?
  • Should we have an 'invalid tuples' filter so users can delete all invalid tuples?

Modular Authorization Models

The authorization policies for a specific application need to be maintained by the application team. In organizations with multiple teams, it would be optimal to let each team maintain their own authorization model.

OpenFGA currently supports a single model per store, and we don't plan to change that, but we want to provide a way for each team to maintain its own model.

A possible solution would be to split the OpenFGA model into multiple 'modules':

base.fga

module base
 
type user

type role
  relations
    define member: [user]

document_management.fga

module document_management
include "base.fga"

type folder
  relations
    define viewer: [user, group#member]

The files would need to be combined before saving them to the OpenFGA store, e.g.

fga model compose base.fga document_management.fga --target model.fga
fga model write --file model.fga

Report Code Coverage in Model Tests

We can report which relations were not exercised by the tests, either directly or indirectly, and whether each relation was tested for both allowed = true and allowed = false, as tests should cover both outcomes.

For example, for the tests below:

model: |
  model
    schema 1.1

  type user
  type document
    relations
      define viewer: [user]
      define writer: [user]
      define can_view: viewer or writer
      define can_edit: writer

tuples:
  - user: user:anne
    relation: viewer
    object: document:1

tests:
  - check:
      - user: user:anne
        object: document:1
        assertions:
          can_view: true

We could report that the can_edit and writer relations were not covered at all, and that viewer and can_view were only tested for positive results.

Allow empty types in authorization model DSL

As a user authoring an authorization model, I want to be able to freely define types and define their relationships later.

Right now we require types to have at least one relation, which is enforced at the API level.

We could enable types without relations, which would simplify prototyping: for example, users starting to work on their model can list the types first, save the model, and then go through the types again as they add relations.

We'll need to:

  • Update the validation logic to allow types without relations in the API JSON
  • Update our DSL to allow empty types

Allow comments in authorization model DSL

As a developer writing a model, I'd want to add comments to better explain it.

The OpenFGA DSL and the Store API do not support comments.

To address this issue, we need to:

  1. Update the API JSON to add a field for comments on relations and types
  2. Update the DSL and the parser to translate comments into the API representation and back

We should also update the documentation to add descriptive comments for complex models.

More flexibility for Read API

The current implementation of the Read API lets you filter read tuples in a few ways (illustrated in the sketch after this list):

  • With no filters (read all tuples)
  • By object id (e.g. tuples for document:document-1)
  • By object id and relation (e.g. tuples for document:document-1 with the 'read' relationship)
  • By user id and object type (e.g. all document tuples for user-1)
  • By user and object (e.g. tuples for user-1 and document-1)
  • By user, object, and relation (e.g. tuples for user-1 and document-1 with the 'read' relationship)
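
With the JavaScript SDK, these filters map roughly to the following Read calls (a sketch, assuming a configured fgaClient; the exact encoding of the type-only filter may vary by SDK version):

// Sketch: the existing Read filters expressed as SDK calls.
// `fgaClient` is an assumed, configured OpenFGA JavaScript SDK client.

// All tuples for a specific object
await fgaClient.read({ object: "document:document-1" });

// Tuples for an object and relation
await fgaClient.read({ object: "document:document-1", relation: "read" });

// Tuples for a user and an object type (type-only object filter)
await fgaClient.read({ user: "user:user-1", object: "document:" });

// Tuples for a user, object, and relation
await fgaClient.read({
  user: "user:user-1",
  relation: "read",
  object: "document:document-1",
});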

However, there are several scenarios where you need more flexibility:

  • By object type (e.g. when deleting a type from the model, I want to delete all tuples for that type)
  • By object type and relation (e.g. when deleting a relation from the model, I want to delete all tuples for that relation)
  • By user (e.g. when deleting a user, you want to delete all tuples for that user)
  • Filters based on the object id, e.g. if objects have a structure like permission:{org}/read, I want to filter by org

Open Policy Agent Integration

As a developer leveraging Open Policy Agent, I want to use OpenFGA for fine-grained authorization.

When using OPA, the data that the policy needs is provided by whoever is enforcing the policy. For example, if the policy says that a resource can only be used by members of a specific group, the group information needs to be provided to the policy. It can be obtained from an identity token, or by making a call to a service or database.

We want to allow OPA users to easily call the OpenFGA ‘check’ API as part of the policy decision. In that way, OpenFGA can be utilized as another Policy Information Point alongside other pieces of information. This allows existing OPA users to leverage OpenFGA.

To achieve that we need to integrate OpenFGA with “Rego”, the language used to define policies in OPA.

Given there's already a third-party OPA integration (https://github.com/thomasdarimont/custom-opa-openfga), it's not clear whether we should invest in this in the near future.

Tuple AutoComplete in VS Code

Given a .fga.yaml file with models and tuples defined, whenever we write a test, we can auto-suggest the tuple values.

For example, if there's a tuple that has user = employee:anne, then whenever I write a check test and type employee:, I should get anne as an option.

Terraform Support

It could be useful to provide Terraform modules so adopters can deploy OpenFGA in different cloud providers.

However, given that we already support deploying to Kubernetes through our Helm charts, and that customers will still need to heavily customize the Terraform module for their production environments, it's not clear this is worth doing.

There is a simple Terraform module that can be used to deploy to AWS: https://github.com/craigpastro/terraform-aws-openfga.

SCIM Adapter

As a developer, I want to reuse the user/group relationships already defined in a user directory.

SCIM provides a way to synchronize directory data across applications.

When deploying OpenFGA, you would want the user/group relationships that exist in the user directory to be kept in sync with the FGA store.

This can be achieved by implementing a SCIM adapter that can subscribe to changes in a SCIM-enabled directory (e.g. Okta's).

These projects can serve as inspiration:

Caching in SDKs

OpenFGA can benefit from multiple layers of caching. One of them is at the client level.

We want to simplify client-side caching by implementing it in the SDKs.
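
A minimal sketch of what client-side caching could look like, assuming a configured fgaClient; a real implementation would also need invalidation and a bounded cache size:

// Sketch: a small TTL cache in front of Check. Illustrative only.
const cache = new Map();
const TTL_MS = 10_000;

async function cachedCheck({ user, relation, object }) {
  const key = `${user}|${relation}|${object}`;
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.allowed;
  }
  const { allowed } = await fgaClient.check({ user, relation, object });
  cache.set(key, { allowed, at: Date.now() });
  return allowed;
}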

Performance Testing for Postgres Adapter

Even though we are happy with the current state of the Postgres adapter, and customers have reported that they are using it in production without issues, we want to perform additional performance and stress testing.

Visual Studio Code Plugin

  • Implement a language service for Visual Studio Code that provides syntax highlighting, autocompletion, etc.
  • Provide a simple way to declare assertions.
  • Provide a way to easily run assertions, or run a batch of check() calls, to verify that the model still works as expected.
