
authentication-panel's Introduction

Solid


Re-decentralizing the Web

Solid is a proposed set of standards and tools for building decentralized Web applications based on Linked Data principles.

Read more on solidproject.org.

authentication-panel's People

Contributors

acoburn, amigus, bblfish, csarven, dmitrizagidulin, elf-pavlik, endlesstrax, ewingson, jaxoncreed, justinwb, matthieubosquet, mitzi-laszlo, nseydoux, pmcb55, tallted, woutermont, zenomt


authentication-panel's Issues

Is a Solid client on an Arduino Uno achievable?

I can't get much more involved in the project, but it seems like this question has quite a few interesting implications, so it might be a topic the panel should discuss.

So, first, the Arduino Uno is a cheap, small microcontroller. It has 2 kB of RAM, 1 kB of EEPROM and 32 kB of flash. And yet, it can do quite a lot. I have Web servers running on a couple of them, so that I can pull data from them.

It would be interesting to take it a step further and have a Solid client on it that can be authorized to write to parts of my pod; in that case, it would be push, not just pull. With the constraints it has, this will be pretty hard, but that also makes it interesting.

One issue is how to identify and authorize it. I think we could perhaps just add something to Solid that could mint a URI for it, so you wouldn't have a full WebID for it, merely a URI that can be used in ACLs. It could be accompanied by a shared secret, a token that could be flashed into the EEPROM.

It seems difficult to implement TLS on it, though. So, could we possibly do something lighter weight? Just pass a JWT across the network with symmetric crypto based on a shared secret between the Arduino and the Solid server? The Solid server would then have a trigger that verifies or decrypts the message and possibly performs some semantic lifting before a representation is created?
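
To make the idea concrete, here is a hypothetical sketch (nothing here is in any spec; the function name, token layout, and error handling are assumptions) of how a Solid server trigger might verify an HMAC-signed (HS256) JWT from such a device, using the shared secret flashed into its EEPROM:

// Hypothetical sketch: verifying an HS256-signed JWT from the device on the
// Solid server, using a secret previously shared with the Arduino.
import { createHmac, timingSafeEqual } from "crypto";

function verifyDeviceJwt(jwt: string, sharedSecret: Buffer): object | null {
  const [header, payload, signature] = jwt.split(".");
  if (!header || !payload || !signature) return null;

  // Recompute the HMAC over the signing input and compare in constant time.
  const expected = createHmac("sha256", sharedSecret)
    .update(`${header}.${payload}`)
    .digest();
  const received = Buffer.from(signature, "base64url");
  if (expected.length !== received.length || !timingSafeEqual(expected, received)) {
    return null;
  }
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}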

Decide on header/mechanism for reporting auth metadata for a Solid pod (RS)

As both the authentication and authorization/ACL systems for Solid evolve, it would be helpful to standardize on (and document) some way to inform clients which system (and spec version) a particular storage pod (Resource Server, in OAuth2 terms) is running.

For example, we currently use the scope param of the WWW-Authenticate response header to help solid-auth-client figure out which auth mode a Solid server is running in (WebID-TLS-only or WebID-OIDC).

So, for node-solid-servers running in WebID-OIDC mode, on a 401 Unauthorized response, the server returns the following header:

WWW-Authenticate: Bearer realm="<pod serverUrl>", scope="openid webid"

The scope maps to, roughly:

  • openid webid === WebID-OIDC mode
  • tls webid === legacy WebID-TLS-only mode

(This was included to help solid-auth-client decide on how to interface with the server).
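
For illustration, here is a minimal sketch (not the actual solid-auth-client code; the function and mode names are illustrative) of how a client might interpret that scope parameter:

// Illustrative sketch: inferring the server's auth mode from the
// WWW-Authenticate scope parameter returned on a 401.
function detectAuthMode(wwwAuthenticate: string): "WebID-OIDC" | "WebID-TLS" | "unknown" {
  const match = /scope="([^"]*)"/.exec(wwwAuthenticate);
  const scopes = match ? match[1].split(/\s+/) : [];
  if (scopes.includes("openid") && scopes.includes("webid")) return "WebID-OIDC";
  if (scopes.includes("tls") && scopes.includes("webid")) return "WebID-TLS";
  return "unknown";
}

// detectAuthMode('Bearer realm="https://pod.example", scope="openid webid"')
// => "WebID-OIDC"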

So, we have several questions before us:

  • Should we have a mechanism that denotes which authentication / authorization system and version a Solid server is running?
  • What should that mechanism be? (Typically, this is done via the WWW-Authenticate response header, so if we continue going with that, what should the parameter be?)
  • The WWW-Authenticate header spec tends to conflate authentication and authorization mechanisms into one -- how do we separate the two? (The authorization mechanism does not need to be communicated.)
  • Where should we keep a registry of the various supported systems / spec versions?

Allow the Implicit Flow or Ban It

Background:

The OAuth Implicit Grant (https://auth0.com/docs/flows/concepts/implicit) is vulnerable to Cross-Site Scripting (XSS) attacks but is still an OAuth standard. The Authorization Code Grant with PKCE (https://auth0.com/docs/flows/concepts/auth-code-pkce) covers the same use case as the Implicit Grant but is not vulnerable to XSS.

Pros/Cons of removing it:

  • Pros:
    • Ensures that no Solid compliant client or server uses this grant
  • Cons:
    • Some app developers may be used to using this grant and dislike its removal in the Solid spec
    • Keeping the implicit grant would make Solid’s OAuth more in line with traditional OAuth

Use OAuth.xyz

Notes from Justin Richer

The current OIDC use in Solid requires dynamic registration, which, for the Solid use case, is an unneeded step. https://oauth.xyz/ might be a better fit.

Clarify use of Bearer token

https://github.com/solid/authentication-panel/blob/master/oidc-authentication.md#resource-access

Ephemeral clients MUST use DPoP-bound Access Tokens, however, the RS MAY allow registered clients to access resources using a traditional Bearer tokens.

I don't see how the RS would know whether a client registered with the OP or not. If we don't make DynReg optional for the client (#56), then all clients will actually be registered. And even if a client is registered with the OP, I don't think that on its own entitles it to use Bearer tokens.

Clarify requirement and use of PoP tokens in WebID-OIDC protocol

The current WebID-OIDC specification describes the use of Proof of Possession tokens, but many details are left out.

  1. Is the cnf claim required?
  2. Does the cnf claim belong in the body or header of the JWT?
  3. Does the key field belong in the body or header of the ID token (or access token, #65)?
  4. It is clear that an RS must reject a token with certain mismatched data, but must it also reject a token that doesn't contain the cnf claim at all? (i.e. a non-PoP token; effectively, a downgrade attack)
  5. Is there a "token_type": "pop" claim requirement as in the examples?
  6. Should the id_token claim be renamed? (c.f. #65)

Reference: https://tools.ietf.org/html/draft-fett-oauth-dpop-02

consider handling multiple realms per origin

according to discussion in #1, there is currently no support in solid auth clients for multiple realms/protection spaces per origin. the current POP token construction implies that the same token can be used in any protection space at an origin.

addressing this can be done entirely on the client side today, by paying attention to the realm parameter of the WWW-Authenticate response header in a 401, and taking care to differentiate and track by realm if an access token is rejected for some reason (for example, if it was revoked in one protection space).

it would also be handy if access tokens for different protection spaces had to be different, for example by doing #3 or by obtaining an access token from an authorization server instead of making one in the client.

at the very least, multiple realms per origin should not be prohibited, and documentation should acknowledge that it is a valid case in HTTP and clients should take care.

consider the discussion beginning at #1 (comment) to be incorporated by reference as though fully set forth in this issue.
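
A rough sketch of the client-side bookkeeping this implies, assuming the client reads the realm parameter out of the 401's WWW-Authenticate header (all names are illustrative):

// Rough sketch: access tokens tracked per (origin, realm) pair rather than per
// origin, so a token rejected in one protection space is forgotten only there.
const tokensByProtectionSpace = new Map<string, string>();

function protectionSpaceKey(resourceUrl: string, realm: string): string {
  return `${new URL(resourceUrl).origin} ${realm}`;
}

function storeToken(resourceUrl: string, realm: string, accessToken: string): void {
  tokensByProtectionSpace.set(protectionSpaceKey(resourceUrl, realm), accessToken);
}

function forgetRejectedToken(resourceUrl: string, realm: string): void {
  // e.g. after a 401 whose WWW-Authenticate realm matches a cached token
  tokensByProtectionSpace.delete(protectionSpaceKey(resourceUrl, realm));
}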

Clarification needed on Ephemeral Identity Providers

In the proposed spec, it says:

In a decentralized ecosystem, such as Solid, the IdP may be:

The user
Any client application, or,
Preexisting IdPs and vendors

The "Preexisting IdPs and vendors" use case is the one we talk about the most, but the other two are confusing and need clarification.

When "The user" is mentioned, is this the self-signed auth flow? If so, the self-signed auth flow will need to be defined in this specification as the current one isn't adequate for Solid's use case. If not, what does it mean.

I do not know of a use case where a client or application serves as the IdP. Could that be clarified?

consider methods to obtain access tokens from an authorization server instead of making them in the client

in the current POP token scheme, the client directly manufactures an access token to present to the server with the "Bearer" method. as discussed on the 2019-08-26 and other calls and in #1, i believe there are numerous problems with the current POP token scheme.

several issues raised in #1 can at least be partially addressed by approaches discussed in #3, #9, #10. a summary of the most important remaining problems specifically with the client making access tokens:

  • in the OAuth model, the client doesn't have standing to make the access tokens for an unrelated resource server. this right belongs to the resource server's authorization server. this may not withstand the scrutiny of security experts, and is incompatible with the kinds of security infrastructure likely to be deployed in enterprises.
  • the client chooses the validity period of the access token, for which it also doesn't have standing.
    • even with #10, there's no way for the resource server to have an access token that's valid for longer than the client proposes, which might be useful if the resource server is overloaded and wants to put off expensive (re)verification of aspects of the token, which could involve additional cryptographic and network accesses.
  • the format of the access token is mandated for all Solid-compatible servers, rather than being up to the resource server's authorization server. this makes it potentially incompatible, and potentially non-optimal, with the kinds of OAuth infrastructure likely to be deployed in enterprises.
    • also these tokens are necessarily quite big and must be passed around, or at least (with HTTP/2 compression) processed in some way, on every request.

obtaining an access token from the resource server's authorization server instead of using an access token made by the client addresses the above and has the following benefits:

  • the server chooses the validity period of the access token according to its security and operational requirements. the token lifetime might be shorter or longer than that proposed by the client.
    • the server can choose a validity period according to its own policy and security considerations.
    • the server can make the validity period longer than proposed by the client if the server is overloaded.
  • the server can choose the format of the token to be optimal for its implementation and intended scale of operation. this could include
    • the token being a short (but "long enough") key to a database record, which is simple to implement correctly and is appropriate for a modest volume. the token can be revoked by forgetting the database record (see the sketch after this issue's cost list).
    • the token being an encoding of relevant information (like the webid, appid, and validity period) signed by the server, whose presentation can be verified using on-hand information only.
      • useful in high-volume and distributed systems where storing all active tokens in a database might be infeasible.
      • since revocation is rare, it is probably feasible to remember individual revoked tokens in a table until they expire, or to revoke all active tokens by iat or serial number or something. techniques like Bloom filters could be used in very high-volume deployments.
  • the multiple cryptographic and network steps involved in identity verification need only be done when new tokens are issued.
    • complex, fragile, and hard-to-get-right whole-token or intermediate verification caching strategies aren't needed for performant operation.
    • caching especially of the webid profile document might not be necessary, which can make the server more responsive to relevant changes to the profile.
  • in one proposed method, a resource server + authorization server, with minimal extra attributes in a 401 response, could provide an API endpoint to obtain an access token that is otherwise compatible with the server's existing Bearer-based authorization infrastructure.
  • the authorization server can support multiple methods for obtaining an access token (for example, a method using WebID-TLS as well as one based on WebID-OIDC).
  • conforms to the OAuth "Bearer" model, which might make security people happier.

with the following cost:

  • it requires an extra serialized HTTP transaction (to the authorization server) when a new access token is required for resources in a protection space.
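
As a concrete illustration of the database-record variant mentioned above, here is a minimal sketch (one possible implementation, not part of any proposal text; all names are illustrative) of an authorization server minting, verifying, and revoking opaque access tokens:

// Illustrative sketch: the resource server's authorization server mints an
// opaque access token, keeps it in a database (here a Map), and can revoke it
// by simply forgetting the record.
import { randomBytes } from "crypto";

interface TokenRecord { webid: string; appid: string; expires: number; }

const activeTokens = new Map<string, TokenRecord>();

function issueAccessToken(webid: string, appid: string, lifetimeSeconds: number): string {
  const token = randomBytes(32).toString("base64url"); // short but "long enough"
  activeTokens.set(token, { webid, appid, expires: Date.now() + lifetimeSeconds * 1000 });
  return token;
}

function verifyAccessToken(token: string): TokenRecord | null {
  const record = activeTokens.get(token);
  if (!record || record.expires < Date.now()) return null;
  return record;
}

function revokeAccessToken(token: string): void {
  activeTokens.delete(token); // revocation = forgetting the database record
}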

The Mechanism for the IdP to confirm app WebId ownership should be detailed

Currently the spec states:

The client presents its WebID to the IdP and requests an Authorization Code.

But there is no requirement in the spec for IdPs to implement a mechanism that confirms the application's possession of that WebId. This is important because, if an app can claim to be any WebId, it will undermine the access control system down the line.

Caching IdP public keys

I might be wrong about this, but wouldn't recommending the caching of public keys as stated here:

It is RECOMMENDED that the RS keep a list of trusted IdPs, to facilitate the expedient lookup of JWKS through local trust stores or cached public keys.

be an anti-pattern, given that it is also recommended that IdPs rotate their keys frequently?

Proposed Authentication Upgrade Approach


Scope

At the moment, the Solid ecosystem depends on an authentication token structure that was designed before the relevant draft specification was released by the IETF. In November, work began to replace this token.

The following outlines a proposed approach to move the ecosystem towards a more secure token while easing the migration for the affected existing components.

Background

The need to change the token was originally identified by Justin Richer and the new token is a central part of the current work on the Authentication specification. Given there was no official 1.0 spec for authentication, it was decided to make one with the new token at the center.

In parallel, work was undertaken to use the new token in developer libraries.

Problems with the Current Token

  1. The token was designed before the relevant draft was released by the IETF, so its format differs from that draft.
  2. The token is not future proof for verifiable credentials (VC). VCs are useful in access control use cases that are not based on a user’s identity. For example, a resource controller might want to make a document available only to people who can prove they are citizens of a European Union (EU) member state. It would check for a VC signed by an EU government verifying that the bearer of that VC is a citizen.
  3. The current token is not future-proofed to handle client-constrained access control.
  4. When both a Pod and application are served over HTTP rather than HTTPS, the current token can be stolen and replayed.

Components Affected

  • Solid Authentication Spec
  • Solid-Auth-Client (The original authentication client for the web browser)
  • Solid-Auth-Fetcher (The new authentication client designed to replace solid-auth-client)
  • NSS
  • inrupt.net and solid.community deployments
  • Solid-Auth-Cli (The original authentication client for the server)
  • Any library that directly depends on Solid-Auth-Client or Solid-Auth-Cli
  • All authentication based dependencies of Solid-Auth-Client, and Solid-Auth-Cli

Upgrade Plan Success Criteria

The following are the proposed success criteria for an upgrade plan:

  • A normative Authentication specification has been approved by the Authentication Panel.
  • A client library using the new token has been developed.
  • Node Solid Server’s Identity Provider supports the new token.
  • All libraries that require deprecation have been deprecated (see below).
  • Application maintainers can upgrade their applications to use the new token with minimal effort (preferably by simply bumping the dependency version).
  • Servers provide a grace period (the current recommendation is 90 days) for application developers to upgrade their applications.
  • A communication plan has outlined a way to communicate these changes to the community.

Approach for Each Component

Solid Authentication Spec

The process of writing a normative spec for approval by the authX panel is ongoing. While it is not explicitly required to begin implementations, it is required to confirm that implementations are conformant to the specification.

Solid-Auth-Client

Solid-Auth-Fetcher will replace Solid-Auth-Client as the primary auth client library in the solid ecosystem. While Solid-Auth-Client was designed to operate in the web browser, Solid-Auth-Fetcher is designed to operate in many environments with different interfaces. This will add some complexity to an application developer’s upgrade process.

To ease the transition, a wrapper for Solid-Auth-Fetcher with the same interface as Solid-Auth-Client will be provided. The wrapper should be made available on NPM as ‘solid-auth-client 2.0’ with a security warning to upgrade.

Solid-Auth-Client 2.0 should clear sessions that use the old token.

Solid-Auth-Fetcher

The first release of Solid-Auth-Fetcher should include server and browser compatibility. It should be compatible with the new token but should recognize when it is communicating with a server that only supports the old token and fail gracefully.

It would make sense to release Solid-Auth-Fetcher at the same time as solid-auth-client 2.0.

NSS

NSS must also be upgraded to work with the new token. The upgrade must use an identity provider that can issue both old and new tokens, and a storage server that can accept both old and new tokens. Server maintainers should be able to enable and disable support for each token type in the configuration.

Inrupt.net and solid.community deployments

inrupt.net and solid.community should be upgraded to the new version of NSS. The deployment should be configured to support both the old and new tokens for an agreed period, after which it should be changed to only work with the new token. This gives application developers time to upgrade their applications. The proposed duration for support of both tokens is 90 days.

Solid-Auth-Cli

Solid-Auth-Cli was the counterpart to Solid-Auth-Client that focused on login from the server. Its interface encourages bad practice, as it invites the developer to pass the user's username and password directly. Therefore, Solid-Auth-Cli should be deprecated and replaced by Solid-Auth-Fetcher.

Libraries that depend on Solid-Auth-Client or Solid-Auth-Cli

There are multiple libraries that depend on Solid-Auth-Client or Solid-Auth-Cli. An incomplete list is as follows:

Those depending on Solid-Auth-Client should be upgraded to use Solid-Auth-Fetcher or Solid-Auth-Client 2.0. Those depending on Solid-Auth-Cli should use Solid-Auth-Fetcher.

Auth based dependencies on Solid-Auth-Client and Solid-Auth-Cli

Auth-based dependencies of Solid-Auth-Client and Solid-Auth-Cli should be deprecated (see below):

Communications

As this is a significant change, the approach should be approved by the auth panel initially, then a communications plan should be created and approved before communicating with the community.

Approach Summary

The diagram below reflects the current recommended approach for rolling out the auth upgrade to the Solid ecosystem:

https://www.lucidchart.com/invitations/accept/1a670674-6786-4a8d-822c-27f307633de7

Alternative protocols for resolving WebIDs

Background:

Currently, the WebID spec (https://www.w3.org/2005/Incubator/webid/spec/identity/#the-webid-http-uri) limits discovery to the http (or https) protocol. However, there are other possibilities for discovery, including but not limited to:

  • IPFS (https://ipfs.io/), a distributed file system that could resolve to Turtle documents.

  • Decentralized Identifiers (DIDs, https://w3c-ccg.github.io/did-spec/), a collection of different identifiers with a common interface. Some supported ecosystems can be seen here (https://w3c-ccg.github.io/did-method-registry/). Note that DIDs do include IPFS.

  • Pros:

    • Can handle cases where developers do not want to depend on DNS
    • In the case of DID, would allow a flexible interface to potentially support any future identity system.
  • Cons:

    • Diverges from the WebID standard
    • Resource servers would need to implement another protocol

Acknowledgments Section

Looking at various specs and the many different ways in which to write this section, this one appeals to me the most: https://www.w3.org/TR/2019/REC-vc-data-model-20191119/#acknowledgements. I like its straightforwardness. Though I'm completely open to suggestions on how we format this section.

Can we start compiling a list of names to add?

  • Dmitri Zagidulin
  • Paul Worrall
  • Michael Thornburgh
  • Justin Bingham
  • Jackson Morgan
  • Aaron Coburn
  • Matthias Evering
  • Henry Story
  • Jamie Fiedler
  • Davi Ottenheimer
  • Ricky White
  • Adam Migus

Alternative Authentication Flow

This is based on an offline discussion between @acoburn, @dmitrizagidulin, and @jaxoncreed. TL;DR: it's an authentication flow that does not require token wrapping or crypto on the client. Instead, it depends on making a round trip to the OP every time the client wants to query a new RS.

The current WebID-OIDC spec uses PoP with a wrapped id_token, to a large extent because this allows the RP to scope an audience claim to a particular RS (thus addressing the issue of token exfiltration). Per OIDC, the audience claim of an ID token is the RP itself, so this mechanism allows us to direct that audience claim to a particular RS instead.

But if we are changing from using ID tokens to access tokens, we potentially have much more flexibility around the structure of those JWTs. What if, in addition to the standard ID token returned from the /token endpoint of an AS, the response contained an access token with this structure:

{
    "kid" : "AS key id"
}
.
{
    "sub" : "WebID of agent" ,
    "iss" : "AS/IdP identifier" ,
    "aud" : "RS/Pod identifier" ,
    "azp" : "RP/App identifier" ,
    "iat" : "..." ,
    "exp" : "..."
}
.
(AS-based signature over the above)

This would vastly simplify the token structure that an RS needs to validate and it eliminates the need for RPs to sign the outer envelope token. The downside is that, for each RS that a given RP interacts with, that RP would need an extra round-trip with the AS: either via the /token endpoint or some other endpoint where the RP could exchange an ID token for a signed, RS-scoped access token.
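
For illustration, a hypothetical sketch of the checks an RS might perform on such an access token; verifyJwtSignature stands in for verification against the AS's published keys and is not a real library call:

// Hypothetical sketch of RS-side checks on the access token structure above.
interface ProposedAccessToken {
  sub: string; // WebID of agent
  iss: string; // AS/IdP identifier
  aud: string; // RS/Pod identifier
  azp: string; // RP/App identifier
  iat: number;
  exp: number;
}

declare function verifyJwtSignature(jwt: string, issuer: string): ProposedAccessToken | null;

function checkAccessToken(jwt: string, expectedIssuer: string, thisRs: string): ProposedAccessToken | null {
  const claims = verifyJwtSignature(jwt, expectedIssuer);
  if (!claims) return null;
  const now = Math.floor(Date.now() / 1000);
  if (claims.iss !== expectedIssuer) return null;        // signed by the expected AS
  if (claims.aud !== thisRs) return null;                // scoped to this RS/Pod
  if (claims.iat > now || claims.exp <= now) return null; // within the validity window
  return claims;
}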

Technical Objections and Concerns with Proposed DPoP + VC Scheme


While a new authentication scheme using DPoP and Verifiable Credentials may technically still be a proposal, it has the outward appearance of a foregone conclusion. I have serious reservations and technical objections regarding the use of DPoP in a Solid authentication scheme, as well as concerns about proposed changes to OpenID Connect. I believe a different approach (for example, #12) is a better fit for Solid.

Problems with DPoP

  • draft-fett-oauth-dpop is not a "draft recommendation released by IETF"; in fact it has no formal status. It is an independent submission by its authors, is not a Working Group item (especially not the OAuth Working Group), and is not currently on a track to (eventually) represent a consensus of the IETF. An Internet-Draft is especially NOT a means of "publishing a specification" -- see Section 2.2 of RFC 2026 (BCP 9) for more specific information regarding the disposition of Internet-Drafts.

  • The syntax of DPoP inflates HTTP request headers with at least one extra JWT comprising at least hundreds of bytes. Because each request must have a fresh and unique proof-of-possession, this header is not compressible with HTTP/2 header compression. Transporting this extra data wastes natural resources (time and energy) and money. This waste will add up if Solid becomes widely used. Alternatively, OAuth2 bearer tokens can be compact (as an implementation detail of the Authorization Server), and can be compressed in HTTP/2 when reused for multiple requests.

  • DPoP requires expensive public-key cryptographic operations on every request by the client (signing) and server (verifying) even for multiple requests between the same client and server. This wastes natural resources and money. This requirement may be burdensome for resource-constrained clients (such as low-power or very slow devices). Alternative proposals employing proof-of-possession constructions can amortize the cost of signing and verifying over multiple requests between the same client and server.

  • Requiring a unique jti for every request (as a cornerstone of DPoP's security model) places an unreasonable burden on servers that must keep track of them, and complicates horizontal scaling of servers.

  • The unique jti chosen by the client, while it protects against replay, doesn't address the "happens-after guarantee of current possession" reason why you want to challenge a party to sign a nonce chosen by you (to exclude precomputation by an adversary).

  • DPoP is designed to address a very specific, non-Solid threat/usage model, where an AS issues an opaque-to-client access token to the client, for use with a confederation of Resource Servers that trust the AS but not each other (that is, where a Resource Server can be an adversary to another Resource Server in the confederation). As such, many of the semantics of DPoP are unnecessary for Solid, and its syntax and operational requirements are needlessly expensive.

Problems with OpenID Connect Provider Requirements

To encourage adoption of Solid authentication, ad hoc changes to OIDC (and especially compatibility-breaking changes and changes that will require more than trivial modifications to existing servers) should be minimized.

Optional Client Registration

I don't think optimizing or eliminating Dynamic Client Registration is a problem that needs to be solved for (and especially by) Solid. If there is a belief that this is really a problem, it should be taken up in OIDC, to be debated and resolved by competent experts there.

Concerns over OIDC Implicit Flow

If there are concerns with using the OIDC Implicit Flow, they should be taken up in the processes of OIDC, to be debated and resolved by competent experts there. For the greatest compatibility with OIDC Providers, Solid specs should instead recommend following the latest guidance from OIDC regarding login flow security, perhaps recommending (but not requiring) not using the Implicit Flow with sufficient and compelling justification.

Verifiable Credential in ID Token

The Verifiable Identity Credential is substantially just an ID token with a confirmation key. Unless divulging the client's client_id to a 3rd party Resource Server is a major problem, I think that merely requesting inclusion of a confirmation key in the id_token is a simpler ask for OpenID Connect Providers to implement, and, in combination with the sub being the WebID and the App ID being in the aud, is sufficient. That being said, computation and delivery of a distinct (if substantially the same) VC is probably not insurmountably burdensome.

If there is to be a distinct Verifiable Identity Credential, then since it must be independently verifiable anyway, it should be in a separate OP response attribute, rather than be embedded as a claim in the id_token. This is congruent with the current syntax of OIDC responses. Additionally, having it separate is more compact to transmit to the client (especially as an attribute of a URL fragment identifier) since making it a claim in the id_token will incur an additional Base64 encoding with corresponding expansion.

Requirements related to OAuth Client Registration

This issue intends to provide space for focused discussion on various aspects of this specific topic. A few excerpts from the discussion in #21:

@jricher Instead of dynamic registration as in the current prototype, we should be using static registration or its equivalent to introduce a client to the AS. With the current prototype setup, registration is required to set up a client_id and associate it with the application keys. The downside is that for ephemeral applications, and even native applications that can get deactivated/uninstalled/abandoned, this leaves a lot of dangling registrations at an AS that will never be seen again. The client ID in OAuth2 allows an instance of software to be identified across multiple authorization requests, but it’s rare that a single application instance would ever ask for a second token. I believe that we can use technologies like PKCE and DPoP to fill in the functionality currently provided by DynReg. Coupled with this, we can use a WebID for the client ID, or use it to fetch/validate a client ID, and tie that to a set of display and key information for a client. Client IDs are public information, and any attacker could claim any client ID, but we can use WebID mechanisms to lock down the behavior of a given client ID such that any attacker would need to also have control over the appropriate URLs for an app.

@zenomt regarding Dynamic Registration: today, dynamic registration is used only between a client (such as a single-page browser app (SPA)) and the user's OpenID Provider. since the user is likely to be the same from use-to-use of the same SPA in the same browser on the same device, the SPA can (and many do) remember the client ID & secret (and other aspects of the openid-configuration) from run-to-run in that browser on that device. and as we discussed on the call this morning, my proposal doesn't require a dynamic registration with the Resource Server's Authorization Server either.

@jricher DynReg is needed in WebID-OIDC because the keying material used by the client is associated with the client's registration, and not with the access token itself. This doesn't make sense for ephemeral keys and in-browser clients. This is in fact where DPoP comes into play: the keys used by DPoP do not need to be pre-registered with the client, removing the need to have each instance of the client dynamically register itself with the target AS.

@zenomt i'm not sure what you mean by "target AS". in the current Solid POPToken scheme and in my proposal, dynamic registration is only done between the client app and the user's trusted OpenID Provider.

@dmitrizagidulin Currently, the WebID-OIDC protocol does not use the Dynamic Registration to register a client app's keys.
An ephemeral key pair for the client app is generated for each session, and is sent over to the IdP during the Authorization Request step (and returned in the ID Token). (These are the keys that are used for the PoP token etc).

So, currently, we're only using DynReg for a (throwaway) client_id, that's about it.

The conversation goes a little further, but I think that last sentence from @dmitrizagidulin captures the current state of things.

I also see in #6

@jaxoncreed The current OIDC use in Solid required dynamic registration which, for the Solid use case, is an unneeded step. https://oauth.xyz/ might be a better fit

I think it would be useful to clarify whether that suggestion stems from a misunderstanding about registering client keys during Dynamic Registration, or whether OAuth.xyz addresses something related to the (currently undefined) requirements for client registration.

Last but not least, @zenomt mentioned Stateless Client Registration a couple of times. We could also document what role we see it playing.

This issue also seems directly related to authenticating clients #25 and identifiers for clients solid/authorization-panel#30

@dmitrizagidulin An ephemeral key pair for the client app is generated for each session, and is sent over to the IdP during the Authorization Request step (and returned in the ID Token). (These are the keys that are used for the PoP token etc).

In #25 I mention the advantage of storing a non-extractable CryptoKey compared to storing a plain-text client_id and secret. I understand that in WebID-OIDC you intended the key pair to be generated by the client each session. Did you intend the client to store the client_id and client_secret across those sessions, while it uses different key pairs? If true, this aspect seems to be missing from your summary.
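
For reference, a minimal sketch of generating such a per-session, non-extractable key pair with Web Crypto (the algorithm choice is illustrative):

// Minimal sketch: a non-extractable key pair generated in the browser; the
// private key can be used for signing but cannot be exported as plain text,
// unlike a plain-text client_secret.
async function generateSessionKeyPair(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // extractable: false
    ["sign", "verify"]
  );
}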

So, currently, we're only using DynReg for a (throwaway) client_id, that's about it.

I recall that during one of the calls you mentioned NSS issuing both a client_id and a client_secret, which can then be used with the refresh_token grant. At the same time, I notice the lack of a client_secret in the snippet from https://github.com/solid/webid-oidc-spec/blob/master/application-user-workflow.md#11-returns-successful-registration

Use HTTP-Signature instead of WebID-RSA

@dlongley and @msporny's draft-cavage-http-signatures-05 has a few implementations and is written in a style that has a good chance of being adopted by the IETF. It is generic enough to satisfy a wide set of use cases. And there is strength in numbers. This suggests that we use it as the basis for WebID-RSA (though it could do with a better name; see issue 5).

Of course we need to see if it works for us. That is what I have been working on:

  • I have implemented a server version of HTTP Signature in not too much code [1],
  • I am now looking at the (browser) client side: the issue is that one does not get access to all the headers one would like. So I opened issue 156 on the Fetch WHATWG GitHub repo, to see if the Fetch API could give the client full access to the HTTP headers as they are sent. Though it may be a long time before that wish comes to pass, so we need to see if we can do without full access to the headers.

What the SoLiD spec could suggest is a number of headers to use for authentication. Here is my first proposal. The SoLiD spec could suggest that clients sign at least the following headers:

  • an optional User header for the WebID, when the user has one. This avoids an intermediary adding a WebID to the request and pointing the WebID profile to the WebKey document (assuming the WebKey document does not point to the WebID for reasons of privacy)
  • a MUST for Signature-Date (name open for discussion). This header is needed so that the message cannot be used in a replay attack; it forms a good nonce. The server could also verify that this date is within a couple of seconds at most of the actual Date header sent by the browser. We are doing this over TLS, but these headers could end up in logs, and those logs could be stolen.
  • signing the (request-target) seems like a good idea too. (Can that in some way be thought of as the nonce?)
  • signing the Host

The Authorization header would then look like this:

Authorization: Signature keyId="https://joe.example/keys/key#",algorithm="rsa-sha256",\
headers="(request-target) host user signature-date",\
signature="Base64(RSA-SHA256(signing string))"

note:

  • the \ above indicates that it and the newline character that follows are just for display purposes
  • Base64(RSA-SHA256(signing string)) is a function of a signing string, specified below, which would
    be something like:
(request-target): get /profile\n
host: jane.name\n
user: https://joe.example/#\n
signature-date: 2015-11-10T17:39:25.192Z

What SoLiD adds to HTTP-Signature, then, is the definition of Signature-Date, which gives the time of the signature, with the requirement that it be no more than a few seconds out of sync with the real date, and the requirement of User for the WebID when the user has one (which is already something SoLiD uses).

This would then also require the WebID Working Group at some point to define WebID verification given a key and such a header (not difficult: just dereference the WebID and see if it points to the key, I think).

The initial SoLiD server response that could have launched this exchange would then have looked like this:

401 Unauthorized
WWW-Authenticate: Signature realm="/",headers="(request-target) host user signature-date"
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Authorization,Host,User,Signature-Date
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: WWW-Authenticate

The Signature-Date header is defined as the precise time, in ISO 8601 format with millisecond precision, at which the header was signed. You can get this in JS with

new Date().toISOString()

This would need to be added to the IETF header registry referred to by section 8.1 of RFC 7230 on HTTP 1.1.
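
Putting the pieces together, here is a rough browser-side sketch of building the signing string and Authorization header with Web Crypto, under the assumptions of this proposal (the key id, WebID, and header handling are illustrative; the Signature-Date and User headers would still need to be set on the request itself):

// Browser-side sketch for the header proposal above; all values are placeholders.
async function buildSignatureHeader(privateKey: CryptoKey, method: string, path: string,
                                    host: string, user: string): Promise<string> {
  const signatureDate = new Date().toISOString();
  // Construct the signing string in the order listed in the headers parameter.
  const signingString = [
    `(request-target): ${method.toLowerCase()} ${path}`,
    `host: ${host}`,
    `user: ${user}`,
    `signature-date: ${signatureDate}`
  ].join("\n");

  // Sign with an RSA key previously generated/imported for RSASSA-PKCS1-v1_5 + SHA-256.
  const signature = await crypto.subtle.sign(
    "RSASSA-PKCS1-v1_5",
    privateKey,
    new TextEncoder().encode(signingString)
  );
  const encoded = btoa(String.fromCharCode(...new Uint8Array(signature)));

  // The Signature-Date and User headers must also be sent on the request itself.
  return `Signature keyId="https://joe.example/keys/key#",algorithm="rsa-sha256",` +
         `headers="(request-target) host user signature-date",signature="${encoded}"`;
}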

[1] The core verification code, without the HTTP header setting:

How does a user revoke a PoP Token for all resource servers

This issue raises a similar concern to the one @zenomt raised in #1, but from a different perspective.

It's true that the resource server can ignore a token it finds unsavory, but what if the following happens:

Alice logs into shadyapp.com, and shadyapp.com receives a token so it can query Alice's and Alice's friends' Pods. Later, Alice sees an article that says shadyapp.com is not to be trusted. So, she wants to globally revoke the token.

In a traditional OIDC environment this is possible because we're only dealing with one resource server, but in a world where this token could represent Alice for every resource server in existence, it becomes harder.

One possible way to handle this is to replace step 8 here (https://github.com/solid/webid-oidc-spec/blob/master/application-user-workflow.md#8-requests-public-keys) with a different route that allows the resource server to send the token for the authorization server to confirm. However, this solution negates any efficiency improvements through caching.
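
One possible shape for that replacement step, in the spirit of OAuth 2.0 Token Introspection (RFC 7662); the endpoint and field names here are illustrative, not part of the current spec:

// Illustrative sketch: the resource server asks the authorization server for a
// live verdict on the token instead of relying only on cached keys.
async function isTokenStillValid(token: string, introspectionEndpoint: string): Promise<boolean> {
  const response = await fetch(introspectionEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token })
  });
  if (!response.ok) return false;
  const result = await response.json();
  return result.active === true; // false once the user has revoked the token
}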

Proposal: Standardize the WebId's content as only auth-specific

This has been talked about before, but I don't think an official issue has ever been made for this:

Problem

Currently, the WebId is considered the user's profile. It includes their name, their image, and other things about their person. In addition, it includes their authentication information like their OIDC Issuer and their certs.

This presents a problem: authentication information must always be public, as it's needed for entities to confirm identity ownership. However, profile information could be public or private depending on the user's preferences, and putting it in the WebId requires this information to be public.

Note that it still would make sense to have triples related to discovery (like a pointer to a user's inbox) in the WebId, but that is out of the scope of the auth spec.

Proposal

The auth spec should dictate the minimum set of things that MUST be in the WebId, and those things should only pertain to authentication.

The following is what I think a WebId should look like:

@prefix : <#>.
@prefix solid: <http://www.w3.org/ns/solid/terms#>.
@prefix cert: <http://www.w3.org/ns/auth/cert#>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
@prefix example: <https://example.com#>.

:me
    a example:AuthenticatableAgent, example:OIDCAuthenticatableAgent, example:RSAPublicKeyAuthenticatableAgent;
    cert:key
        [
            a cert:RSAPublicKey;
            cert:exponent 65537;
            cert:modulus
                "AB564BF3F36A712A6D17CE87EE49185D802DAF82313C925D51E82ED618200CFDF1542717F41A6D39C01726967A40A170547B050540A089B61A4143DBD4E360EBAC6F086F37A40CDAE61F33AE2181A187B3BE861D9ABF8A439532D0B4DAAB83686508CFB88627F77A8F0D117231521AE095334B28CAEC8FD2928C8A29CB15C38C27DA8B9426478BFB00CED71FB1904C9B0D27E2C4FF9F37882A917BD54957D4D9215E3625B8E195CCF2E8B18A528F9E4D1A19E525AF54CDB0804599DA9786D210AA04821977C7AF8F9C03BA1094F695A19F3C4B52DE9FC11ED14616559FC1DE0C610FBDC0F0DE5D817C417A4A5E6AC3FCD1C7B3F6B574BAFBD36E4B23164CE7D9"^^xsd:hexBinary
        ];
    solid:oidcIssuer <https://solid.community>.

Three new terms are added in this proposal: example:AuthenticatableAgent, example:OIDCAuthenticatableAgent, and example:RSAPublicKeyAuthenticatableAgent. Each of these dictates a way an agent can sign in; they exist to help clients determine the methods available.

Eliminating a Legacy OIDC Discovery Pattern

And while we're at it, it would also make sense to get rid of the ability to discover the OIDC provider from response headers (https://github.com/solid/webid-oidc-spec/#authorized-oidc-issuer-discovery). The primary way to discover an OIDC provider should be via the WebId document, as it is more in line with linked data.

substantial and significant security issues with current "bearer token" scheme

as i mentioned on the mailing list on july 12, i believe there are substantial and significant security issues with the currently implemented (with documentation waiting to be merged) Proof-of-Possession "bearer token" authorization scheme. the most important issues are:

  • the client creates and issues "bearer tokens" with no input from the resource server (or its authorization infrastructure). in general, the client has no standing in the OAuth model to create or issue an access token. this is incompatible with the kinds of OAuth infrastructure likely to be deployed in enterprises.
  • there’s no reasonable way for the resource server (or its authorization infrastructure) to revoke one of these tokens.
  • the validity period (nbf, exp) is solely under control of the webid’s OIDC Issuer and the client/app, another aspect over which those parties don’t have standing. the resource+authorization server is the only party with standing to specify the validity period of its access tokens, which might be shorter or longer than the validity period in any identity proof provided by the client.
  • the format of the bearer token is mandated, rather than being up to the resource and authorization servers. this makes it potentially incompatible (or at least non-optimal) with the kinds of OAuth infrastructure likely to be deployed in enterprises.
  • there’s no provision for multiple protection spaces (realms) with different security policies at the same origin.
  • there’s no way for the resource server to force a current/timely proof of possession of the confirmation key.
  • (related) there’s no way for the resource server to directly cryptographically challenge the client (for example, with a salt/nonce).

also, these proof tokens are necessarily big and must be passed around on every request. HTTP/2 header compression might mitigate that somewhat over the network between a client and a server (or at least a reverse proxy or application gateway), but at some point in the processing chain a multi-K blob of bytes needs to be processed on every request.

some time ago i proposed an alternative authorization method that addresses the above concerns.

Protection against brute force attacks

I just did a superficial analysis of an unsuccessful brute-force attack against my personal WordPress install. The interesting thing about it isn't really how it was done, but the fact that they hammered a single-user, tiny blog with fewer than 10 posts for several hours at 5 requests a second. I didn't think any reasonably intelligent attacker would bother; it was probably a bigger drain on their resources than on mine...

However, it did lead me to think that we should see whether there is something we should do at the IdP spec level to mitigate brute-force attacks. Perhaps it is an implementation detail, but I wanted to bring it up for discussion in case there is something that can be done.

Authenticating Clients

I would like to clarify how, when, and with whom clients have to authenticate. OAuth provides a couple of mechanisms for a Client to authenticate with an Authorization Server. Also, for Sender-Constrained (bound) tokens, the Client proves possession of a private key to the Resource Server, which we can also consider a form of authentication (possibly relying on a different way of identifying the client, solid/authorization-panel#30).

During the Authorization Code flow we rely on the redirect_uri, discussed in #22, and we should also document in one place how we rely on it for identifying and authenticating clients.

I think we also need to clarify client authentication when using refresh tokens, especially if we consider issuing a client_secret and expecting the client to authenticate using client_id and client_secret. For fully in-browser applications, a non-extractable CryptoKey provides a more secure way of keeping a secret than a plain-text client_secret, and since we use Client-Constrained (bound) tokens, we require clients to generate a private key anyway. We could use the Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants so that clients also use their private key for authenticating with the authorization server; in that case we could possibly rely on the Client Credentials Grant and not even need refresh tokens.
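
A hypothetical sketch of that last idea: the client authenticates to the token endpoint with a signed assertion (in the style of RFC 7521/7523) instead of a plain-text client_secret. createClientAssertionJwt is a placeholder for signing the assertion with the client's own key:

// Hypothetical sketch: Client Credentials Grant with a JWT client assertion.
declare function createClientAssertionJwt(clientId: string, tokenEndpoint: string): Promise<string>;

async function requestTokenWithClientAssertion(tokenEndpoint: string, clientId: string) {
  const assertion = await createClientAssertionJwt(clientId, tokenEndpoint);
  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
      client_assertion: assertion
    })
  });
  return response.json();
}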

DPoP Validation

Current draft states:

The encoded HTTP response and resource URL of the DPoP token MUST be dereferenced and matched against the original request in the Access Token.

I fail to understand what exact steps it requires. Does "resource URL of the DPoP token" refer to the htu claim in the DPoP Proof? What does "original request in the Access Token" mean?

Strictly using the terminology

  • DPoP Proof
  • DPoP-bound Access Token

and referring to specific JWT claims could help to eliminate any confusion.

Detailing HTTP-Signature based authentication for Solid

I think it would be useful to detail HTTP-Signatures-based authentication for Solid. I wrote an implementation of it a few years ago for the server, and one before that for the client using Web Crypto.

Not much is needed in addition to the existing spec. I can think of the following:

  1. A definition of a Web-Key: a URL that names a public key, which can be either
    a) located in the WebID doc, or
    b) located elsewhere, in case there is a desire not to tie a key to a WebID.
  2. A verification procedure where the server fetches the public WebKey to check the client:
    a) if the wACL identifies that Agent via that WebKey, then that is all that is needed;
    b) if the wACL identifies the Agent via a WebID, then the client needs to send the WebID in the request. If the situation is 1a above, then that is all that is needed. If 1b above, then there needs to be a proof tying the WebID to the WebKey.

Authenticating Users

While #25 focuses on authenticating Clients, here we can focus on authenticating Users. Preferably we can start following up with PRs for specific findings. In #2 we have some relevant points as well; I think we can extract the ones related to user authentication here and close #2, which served as meeting notes.

  • Authenticating the User with Resource Servers vs. authenticating the User with the OIDC Provider: I think we have general consensus that while the OP can freely provide a vast diversity of authentication mechanisms (the user can independently choose their own OP), for authenticating with an RS we should have a limited set of methods, possibly just one (a user may need to use any number of group storage instances)

I can prepare a PR for next Monday formulating the above as a group finding.

Authenticating with OIDC Provider

  • Biometrics authentication (from #2)
  • WebAuthn (from #2)
  • WebID-TLS (from #2)
  • Multi User Device auth (with alternative sensors ie no keyboard or mouse) (from #2)
  • When using Clients which need to use Device Code Grant or Client Credentials

Authenticating with Resource Servers

  • WebSocket authentication (and authorization)
  • Opaque authentication (accept claims without telling a resource server who you are)

Authenticating with Client

  • User display information - WebID Profile
  • Multiple users using same client in the same operating system session (eg. iOS)

Authenticating with Authorization Servers

  • Comes up in approaches where we have Authorization Server other than User's OP

Research Existing Formalisations of OAuth Protocols

It would help to have a place to collect formalizations of (the relevant parts of) the OAuth protocols.

We can do this in this issue/question or create a wiki page to collect them. These could help build a formalization of OAuth as applied to Solid, which may help resolve some thorny issues, clarify what needs doing, etc. We'll only know when we know what has already been done.

Formalize WebID-OIDC Authentication protocol

There are, at present, two authentication protocols mentioned in The Solid Ecosystem (TSE) document: WebID-OIDC and WebID-TLS. The WebID-TLS specification has already been formalized (as a draft) within W3C. The current WebID-OIDC document, however, has not been formalized into a similarly structured document with normative language.

From an implementation perspective, it would be helpful if the protocol were more precisely formalized.

Formalize JWT access token structure in WebID-OIDC protocol

Under OIDC, ID tokens must be structured as a JSON Web Token (JWT), which gives structure to those tokens. With OIDC, access tokens are often structured as JWTs, but that is not a hard requirement. If the WebID-OIDC protocol moves toward using access tokens in the interaction between a relying party and a resource server, it may make sense to formalize the structure of these tokens as JWTs with certain required fields.

how are redirect_urls authenticated?

As I understand it, identifying apps indirectly via the Origin header is now no longer correct, and one should use the redirect_url of OAuth as an identifier for the app. It is clear that Origins are much too broad to identify a single application.

I am not sure, though, how the OAuth Authorization server authenticates the app. How does the OAuth server distinguish one app at an origin from another?

Validating the Access Token

Current draft states:

The public key in the fingerprint of the Access Token MUST be checked against the DPoP fingerprint to ensure a match, as outlined in the DPoP Internet-Draft. To achieve this the RS must fetch the required public key from the IdP to match the Access Tokens cnf claim against the DPoP tokens fingerprint to verify they are a bound pair.

I think we should keep it very clear that we have two different signatures which need to be validated:

  • DPoP Proof signed by the client
  • JWT Signature on DPoP-bound access token signed by the IdP

Given the above, I don't see exactly how the "public key from the IdP to match the Access Token's cnf claim" mentioned in the current draft is supposed to work. The cnf claim uses the client's public key, not the IdP's public key.

In the relevant part of @dmitrizagidulin's draft, the difference between what the client signs and what the IdP signs was captured very well.
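
To spell out the two checks, here is a rough sketch with placeholder verification helpers (not a real library API); the claim and header names (jwk, cnf, jkt) follow the DPoP draft:

// Sketch of the two independent signature checks described above.
declare function verifyJwsWithJwk(jws: string, jwk: JsonWebKey): any | null;       // client-signed DPoP Proof
declare function verifyJwsWithIssuerJwks(jws: string, issuer: string): any | null; // IdP-signed access token
declare function jwkThumbprint(jwk: JsonWebKey): string;                           // RFC 7638 thumbprint

function validateRequest(dpopProof: string, accessToken: string, idpIssuer: string): boolean {
  // 1. The DPoP Proof is signed by the client; its public key travels in the proof's jwk header.
  const proofHeader = JSON.parse(
    Buffer.from(dpopProof.split(".")[0], "base64url").toString("utf8")
  );
  const clientJwk: JsonWebKey = proofHeader.jwk;
  if (!verifyJwsWithJwk(dpopProof, clientJwk)) return false;

  // 2. The access token is signed by the IdP (checked against the IdP's JWKS), not by the client.
  const tokenClaims = verifyJwsWithIssuerJwks(accessToken, idpIssuer);
  if (!tokenClaims) return false;

  // 3. Bind the two: the token's cnf claim must match the client key used for the proof.
  return tokenClaims.cnf?.jkt === jwkThumbprint(clientJwk);
}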

add a method for the resource server to constrain the validity period of access tokens

in the current POP token scheme, the client manufactures an access token to present to a server. the client chooses how long this token is valid for (nbf and exp), constrained only by the validity of the id_token obtained from the client's OpenID Provider.

this is problematic for at least three reasons:

  1. there's no reasonable way for the server to indicate to the client that the validity period is longer than the server would prefer for security reasons
  2. should the server decide to revoke or otherwise not honor an access token, it might need to remember something about the token (for example, the entire token, or a hash of it, or its confirmation key, or the id_token or hash of it, or the webid) for at least as long as the validity period of the token, which could be longer than the server is prepared to remember it.
  3. there is no opportunity for the resource server to issue a token valid for longer than what the client determined. this might be useful in situations where the server is heavily loaded and wants to put off expensive (re)verification of the identity associated with the token, which can involve additional cryptographic operations and network accesses.

possible ways of addressing this include (but are not limited to):

  1. implement #3 and include an expires_in or similar with the challenge (though this doesn't handle case 3 above), and reject any token that includes the challenge and has an exp after the challenge's expiration date.
  2. obtain an access token from an authorization server instead of making one in the client, where the resource server determines the validity period of its access tokens directly, and communicates that to the client.

General Improvements

General ideas for improvement

  • Formalize the PoP token standard
  • Find the correct approach to multi-RS authentication (try to keep it as close as possible to what currently exists)
  • Use of self-signed authentication (without an identity provider)
  • Document differences between in-browser apps and services
  • Interop with other decentralized auth
    • DID Auth
  • Multi User Device auth (with alternative sensors ie no keyboard or mouse)
  • Biometrics authentication
  • Opaque authentication (accept claims without telling a resource server who you are)
  • Verifiable claims (property based authentication)
  • WebAuthn
  • WebID-TLS
  • HTTP2 enabled?
  • WebSocket authentication
  • Interop with SAML and ActiveDirectory (Other IDPs)
  • Keeping Storage system implementations simple (Don't have a ton of different ways for storage servers to confirm identity)

Consider adding a 'nonce' param to WWW-Authenticate response header

As mentioned by @zenomt in #1, it may be advantageous to:

  1. Add a nonce param to the Resource Server's 401 WWW-Authenticate response headers.
  2. Require WebID-OIDC tokens (bearer or PoP) to include/pass through that nonce param, for verification by the RS

Specifically, on a 401 Unauthorized http response, the Resource Server would include the authenticate header that would look something like:

WWW-Authenticate: Bearer realm="..." scope="..." nonce="abcd123"

Opening this issue as a reminder to discuss:

  • Whether to include this mechanism in whatever auth system WebID-OIDC evolves into
  • What to name that param (nonce or challenge or something else)
  • Add the requirements for its validation by the RS
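
For discussion purposes, an illustrative sketch of how an RS might mint and later validate such a nonce (the claim name nonce is a placeholder pending the naming question above):

// Illustrative sketch: the RS mints a short-lived nonce, advertises it in the
// 401 WWW-Authenticate header, and checks that the presented token echoes it.
import { randomBytes } from "crypto";

const outstandingNonces = new Set<string>();

function challengeHeader(realm: string, scope: string): string {
  const nonce = randomBytes(16).toString("hex");
  outstandingNonces.add(nonce);
  return `Bearer realm="${realm}", scope="${scope}", nonce="${nonce}"`;
}

function validateNonce(tokenClaims: { nonce?: string }): boolean {
  if (!tokenClaims.nonce || !outstandingNonces.has(tokenClaims.nonce)) return false;
  outstandingNonces.delete(tokenClaims.nonce); // single use
  return true;
}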

Use of ID tokens in the WebID-OIDC protocol

In a typical interaction with an OIDC provider, both an access token and an ID token are presented to a client. It is typical for clients to then use the access token in subsequent interactions with resource servers (in this case, a Pod), but the current WebID-OIDC protocol describes using the ID token in the way one might ordinarily use an access token.

It would be helpful to clarify the semantics of these tokens with respect to the token that is presented to a resource server, and whether it would be better to use access tokens instead of ID tokens in this context.

Generalize WebID-OIDC protocol beyond OIDC

Some of the interactions defined in the WebID-OIDC protocol make it more similar to OAuth2 than to a classic OIDC interaction. It may make sense to generalize the protocol, aligning it with OAuth2. If so, it should be decided which formalisms from OIDC would be carried over (e.g. endpoint discovery via /.well-known/openid-configuration).

Consider IdP to issue Identity Verifiable Credential rather than global access token

This issue relates to ensuring extensibility with AuthZ. Having the IdP issue signed JWTs as DPoP-bound access tokens, which the client uses with any number of resource servers, can limit the possibility for parties other than the IdP to stay responsible for authorization.

A previous draft by @dmitrizagidulin had the IdP issue an Identity Verifiable Credential which the client would present to the RS. In the authorization panel we discussed the possibility that the client would need to present a Capability Credential as well, which could even be obtained from a party other than the IdP. I see the pattern where a specialized party issues a specific credential to the client as elegant and as providing a solid foundation for extensibility.

UMA 2.0 has the aspect of Interactive Claims-Gathering. In Solid, an Identity VC, Capability VC, Membership VC, etc. could act as standard claims which the client could use while requesting access. I think we should take into consideration what @zenomt proposes in #12. The way I read the general idea behind it, the client could go through a claims (credentials) gathering step with the RS-associated AS and obtain an access_token to use with that RS. Having the IdP issue global access tokens would close off that possibility, and even the mentioned possibility of obtaining additional credentials from other parties. The IdP issuing an Identity Verifiable Credential seems like a clear way of encapsulating its responsibility without limiting other parties in taking on theirs.

Changes to the PoP Token flow

Notes from Justin Richer:

The current PoP token flow can be kept with these changes:

  • Use access tokens rather than id tokens (Don't present an id token to a third party)
  • Change signatures to model DPoP or Cavage Signatures or HTTP signatures for OAuth Proof of Possession.

Proposal: Rename WebID-OIDC spec to Solid-OIDC

As we're starting to move towards an initial public draft of a 1.0 solid spec, now is the time to consider the naming of our authentication protocol.
This is a proposal to change the name of the authentication protocol (and the corresponding spec) to:

Solid-OIDC.

This would:

  • Lay the groundwork for using DIDs in addition to WebIDs, in the future (as well as any other appropriate identifiers)
  • Communicate that this is a Solid-specific variant of OpenID Connect 1.0 (which is, hopefully, as interoperable with OIDC 1.0 as possible).
  • Differentiate the name of the authentication protocol from the existing WebID-OIDC (since there are likely to be breaking changes).

Normative language is inconsistent in the terms mentioned

In DPoP Validation, checking the iat field is mentioned, but checking the htu and htm fields is not. I know that all of these are referenced in the DPoP spec, but it seems odd that iat, which is also mentioned there, is brought over into this spec while the two arguably more important DPoP fields are left unmentioned.
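
For illustration, the kind of checks being discussed on the RS side, applied to an already signature-verified DPoP proof. The claim names follow the DPoP draft; the clock-skew window is an arbitrary choice for the sketch.

```typescript
// Claims from an already signature-verified DPoP proof JWT.
interface DpopProofClaims {
  htm?: string; // HTTP method of the request the proof is bound to
  htu?: string; // HTTP URI of the request the proof is bound to
  iat?: number; // issued-at, seconds since epoch
  jti?: string;
}

const MAX_SKEW_SECONDS = 60; // arbitrary illustrative window

function checkDpopClaims(claims: DpopProofClaims, method: string, uri: string): boolean {
  if (claims.htm !== method) return false;
  if (claims.htu !== uri) return false;
  if (typeof claims.iat !== "number") return false;
  const now = Math.floor(Date.now() / 1000);
  return Math.abs(now - claims.iat) <= MAX_SKEW_SECONDS;
}
```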

Proposed Initiative - Client Authentication

As discussed in the panel meeting on 7/13, an initiative for this panel is proposed to design and document the mechanism through which clients are authenticated as part of the Solid-OIDC protocol specification.

This initiative should be sure to work in conjunction with the authorization panel and interoperability panel, who will have a number of applicable and dependent use cases.

Proposal for keyed Access Tokens in a Distributed Network

Public Clients. Instead of dynamic registration as in the current prototype, we should be using static registration or its equivalent to introduce a client to the AS. With the current prototype setup, registration is required to set up a client_id and associate it with the application keys. The downside is that for ephemeral applications, and even native applications that can get deactivated/uninstalled/abandoned, this leaves a lot of dangling registrations at an AS that will never be seen again. The client ID in OAuth2 allows an instance of software to be identified across multiple authorization requests, but it’s rare that a single application instance would ever ask for a second token. I believe that we can use technologies like PKCE and DPoP to fill in the functionality currently provided by DynReg. Coupled with this, we can use a WebID for the client ID, or use it to fetch/validate a client ID, and tie that to a set of display and key information for a client. Client IDs are public information, and any attacker could claim any client ID, but we can use WebID mechanisms to lock down the behavior of a given client ID such that any attacker would need to also have control over the appropriate URLs for an app.
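
A hedged sketch of what an authorization request might look like under this approach, with the client's WebID used directly as the client_id and PKCE standing in for a registered secret. The URLs are placeholders and nothing here is a settled parameter profile.

```typescript
import { createHash, randomBytes } from "crypto";

// PKCE: derive a code challenge from a one-time verifier kept by the client.
const codeVerifier = randomBytes(32).toString("base64url");
const codeChallenge = createHash("sha256").update(codeVerifier).digest("base64url");

const authorizeUrl = new URL("https://idp.example/authorize");
authorizeUrl.searchParams.set("response_type", "code");
// The client's WebID (a dereferenceable URL) takes the place of a registered client_id.
authorizeUrl.searchParams.set("client_id", "https://app.example/id#this");
authorizeUrl.searchParams.set("redirect_uri", "https://app.example/callback");
authorizeUrl.searchParams.set("scope", "openid webid");
authorizeUrl.searchParams.set("code_challenge", codeChallenge);
authorizeUrl.searchParams.set("code_challenge_method", "S256");

console.log(authorizeUrl.toString());
```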

DPoP with ephemeral keys. Instead of the keys being registered in one-time-use client entries, DPoP allows a client to present an ephemeral key at token request time. These keys don’t have to be strongly associated to a client’s identity, but they tie a token request to subsequent token presentations. This prevents a token from being replayed by another party, such as a downstream RS. This key binding lasts as long as the token.
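
A minimal sketch of the ephemeral-key idea: the client generates a key pair on the fly, proves possession of it at the token endpoint, and keeps it only as long as the resulting token. The jose library calls and the endpoint URL are illustrative assumptions.

```typescript
import { generateKeyPair, exportJWK, SignJWT } from "jose";
import { randomUUID } from "crypto";

async function requestTokenWithEphemeralKey(tokenEndpoint: string, body: URLSearchParams) {
  // Fresh key pair for this token only; never registered with the AS ahead of time.
  const { publicKey, privateKey } = await generateKeyPair("ES256");
  const jwk = await exportJWK(publicKey);

  // Proof of possession bound to the token request itself.
  const proof = await new SignJWT({ htm: "POST", htu: tokenEndpoint })
    .setProtectedHeader({ alg: "ES256", typ: "dpop+jwt", jwk })
    .setJti(randomUUID())
    .setIssuedAt()
    .sign(privateKey);

  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { DPoP: proof, "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });

  // Keep privateKey around: every later resource request needs a fresh proof
  // signed with the same key, which is what binds the token to this client instance.
  return { tokens: await response.json(), privateKey, publicJwk: jwk };
}
```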

JWT Profile. In addition to the key presentation mechanism of DPoP, we need a way for the RS to figure out who the token is for and what software is presenting it. We can define JWT claims for the user and the client software, and define in the Solid profile that these are WebIDs and how they are to be dereferenced. This would allow the RS to validate the token's presentation mechanism, which binds to the client; the token's own signature, which binds to the AS; and the WebIDs for the user and client, which validate those parties and allow authorization decisions. This JWT access token would replace the use of the ID Token in the current process (and therefore no longer depend on OIDC either), and would need to comply with the various BCPs for JWTs that are floating around.
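
As an illustration of the JWT profile idea, the access token's decoded payload might carry something like the following. The claim names webid and client_id (holding a WebID), and the aud value, are assumptions here rather than settled names; cnf.jkt is the DPoP key-thumbprint binding.

```typescript
// Illustrative decoded access-token payload under the proposed JWT profile.
const accessTokenClaims = {
  iss: "https://idp.example",                       // the AS that signed the token
  aud: "solid",                                     // audience, per whatever the profile settles on
  webid: "https://alice.example/profile/card#me",   // the user's WebID, dereferenceable by the RS
  client_id: "https://app.example/id#this",         // the client's WebID, dereferenceable by the RS
  iat: 1595000000,
  exp: 1595003600,
  cnf: { jkt: "0ZcOCORZNYy-DWpqq30jZyJGHTN0d2HglBV3uiguA4I" }, // DPoP key thumbprint (example value)
};
```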

Structured Scopes. Proposed work by Torsten Lodderstedt. This allows us to define the kinds of information being requested and authorized by the protocol, and as Dmitry pointed out, the current thinking in the OAuth world aligns with the WAC concepts in Solid.
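
A rough sketch of how a structured request might map onto WAC access modes, loosely in the style of the Rich Authorization Requests work; the type identifier and field layout below are purely illustrative and not drawn from any draft.

```typescript
// Purely illustrative: a structured authorization request for specific resources and WAC modes.
const authorizationDetails = [
  {
    type: "https://www.w3.org/ns/solid/terms#Authorization", // hypothetical type identifier
    locations: ["https://alice.example/photos/"],
    // Access modes drawn from the Web Access Control vocabulary.
    access_modes: [
      "http://www.w3.org/ns/auth/acl#Read",
      "http://www.w3.org/ns/auth/acl#Write",
    ],
  },
];
```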

Alignment with existing infrastructure would take the form of an identity broker (for users on their way in) or an application gateway (for APIs on the way out) to bridge between the Solid ecosystem and traditional OAuth/OIDC/SAML systems.

And finally, much of this approach aligns with ongoing work in the XYZ Project, which is seeking to patch a lot of the problems in the OAuth protocol and its extensions.

Verifying client's control of claimed WebID (client_webid)

Current draft states:

  1. The client presents its WebID to the IdP and requests an Authorization Code.

How does the client present its WebID? Should it use some specific query parameter?

It also states:

The Access Token MUST be a JWT and the IdP MUST embed the client's WebID in the Access Token as a custom claim. This claim MUST be named client_webid.

I don't see in the draft how the IdP verifies that the client actually controls that WebID (and is not trying to impersonate it). One possibility we discussed is that the WebID Document returned when the client's WebID gets resolved would include some kind of solid:redirect_uri statement to associate that client WebID with a redirect URI.
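
One shape this verification could take, sketched loosely: the solid:redirect_uri predicate is the hypothetical statement mentioned above, and the document check is reduced to a naive string match purely for illustration.

```typescript
// Naive illustration: the IdP dereferences the client's WebID document and checks
// that the redirect_uri from the authorization request appears in a (hypothetical)
// solid:redirect_uri statement. A real implementation would parse the RDF rather
// than match strings.
async function clientControlsWebId(clientWebId: string, redirectUri: string): Promise<boolean> {
  const response = await fetch(clientWebId, { headers: { Accept: "text/turtle" } });
  if (!response.ok) return false;
  const document = await response.text();
  return document.includes("solid:redirect_uri") && document.includes(`<${redirectUri}>`);
}
```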

Client Registration does not state which entity it's optional for

Under proof of Identity:

Client registration (static or dynamic) is not required in Solid-OIDC, thus Access Tokens need to be bound to the client, to prevent token replay attacks. This is achieved by using a Distributed Proof-of-Possession (DPoP) mechanism at the application layer.

In practice, we need to know whether registration is optional for the client or for the IdP. If it is optional for the client, then the IdP MUST support both a flow with registration and a flow without it, to accommodate clients that do register and clients that do not. The same is true vice versa.

Furthermore, it was my understanding that this spec was designed to follow the implementation built by Inrupt; unless something has changed, I believe that implementation required at least dynamic client registration.

How can 401 responses get tied to a Launcher App

Issue 31 (Accessing NonRDF-Sources directly via the browser) raises the question of how a 401 returned for a resource to a browser that lacks the capability to read Web Access Control ACLs can still allow the user to get access to the resource.

For passwords, the server can return a WWW-Authenticate header, which will get the browser to prompt for credentials. For other authentication systems, a web page is usually returned, allowing the user to enter an OpenID, for example. But this last approach would not let the user take advantage of the ACL logic of their Launcher App, and so would require the user to choose one of the OpenID providers haphazardly, even if none of those fit the ACLs.

Q: What methods could help here?

It would be nice to see how this could be tied to a Launcher App, so that it can make further informed authentication decisions. Could a browser plugin help?
