solid / authentication-panel
GitHub repository for the Solid Authentication Panel
License: MIT License
As mentioned by @zenomt in #1, it may be advantageous to add a `nonce` param to the Resource Server's 401 `WWW-Authenticate` response headers. Specifically, on a 401 Unauthorized HTTP response, the Resource Server would include an authenticate header that would look something like:
WWW-Authenticate: Bearer realm="..." scope="..." nonce="abcd123"
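A client receiving such a challenge would need to pull the `nonce` out of the header. A minimal sketch, assuming the simple quoted-parameter form shown above (not a full RFC 7235 parser; the function name is hypothetical):

```python
import re

def parse_challenge_params(header: str) -> dict:
    """Parse key="value" pairs out of a WWW-Authenticate header value."""
    return dict(re.findall(r'(\w+)="([^"]*)"', header))

# Example challenge like the one above:
params = parse_challenge_params(
    'Bearer realm="https://pod.example", scope="openid webid", nonce="abcd123"'
)
# params["nonce"] is then echoed back by the client in its signed token.
```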
Opening this issue as a reminder to discuss the parameter's name (`nonce`, or `challenge`, or something else).

Notes from Justin Richer:
The current OIDC use in Solid requires dynamic registration, which, for the Solid use case, is an unneeded step. https://oauth.xyz/ might be a better fit.
Background:
The OAuth Implicit Grant (https://auth0.com/docs/flows/concepts/implicit) is vulnerable to Cross-Site Scripting (XSS) attacks, but is still an OAuth standard. The Authorization Code Grant with PKCE (https://auth0.com/docs/flows/concepts/auth-code-pkce) is used for the same use case as the Implicit Grant but is not vulnerable to XSS.
Pros/Cons of removing it:
As a reference to this issue: nodeSolidServer/node-solid-server#1266
Currently, the Solid spec does not explicitly define behaviour for a user whose WebID lists multiple OpenID issuers. The normative spec should indicate that, when dereferencing a WebID that lists multiple issuers, a server must not reject a token whose issuer matches any of the listed issuers.
Current draft states:
The public key in the fingerprint of the Access Token MUST be checked against the DPoP fingerprint to ensure a match, as outlined in the DPoP Internet-Draft. To achieve this the RS must fetch the required public key from the IdP to match the Access Token's cnf claim against the DPoP token's fingerprint to verify they are a bound pair.
I think we should keep it very clear that we have two different signatures which need to be validated:
Given the above, I don't see exactly how the "public key from the IdP to match the Access Token's cnf claim" mentioned in the current draft is supposed to work: the cnf claim uses the client's public key, not the IdP's public key.

The relevant part of @dmitrizagidulin's draft captured the difference between what the client signs and what the IdP signs very well.
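For illustration, here is a sketch of the binding check as I understand the DPoP draft intends it: the access token's `cnf.jkt` should equal the RFC 7638 thumbprint of the *client's* public key carried in the DPoP proof's header. Function names are hypothetical, and both JWTs are assumed to be already signature-verified:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint of an EC public JWK: required members only,
    lexicographic order, no whitespace, base64url-encoded SHA-256."""
    canonical = json.dumps(
        {k: jwk[k] for k in ("crv", "kty", "x", "y")},
        separators=(",", ":"),
        sort_keys=True,
    )
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def is_bound_pair(access_token_claims: dict, dpop_proof_header: dict) -> bool:
    """RS check: does the access token's cnf.jkt match the DPoP proof's key?"""
    return access_token_claims["cnf"]["jkt"] == jwk_thumbprint(dpop_proof_header["jwk"])
```

Note that nothing here requires a key from the IdP: the `cnf` claim pins the client's key, and only the signatures on the two JWTs involve the IdP's and client's keys respectively.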
While a new authentication scheme using DPoP and Verifiable Credentials may technically still be a proposal, it has the outward appearance of a foregone conclusion. I have serious reservations and technical objections regarding the use of DPoP in a Solid authentication scheme, as well as concerns about proposed changes to OpenID Connect. I believe a different approach (for example, #12) is a better fit for Solid.
draft-fett-oauth-dpop is not a "draft recommendation released by IETF"; in fact it has no formal status. It is an independent submission by its authors, is not a Working Group item (especially not the OAuth Working Group), and is not currently on a track to (eventually) represent a consensus of the IETF. An Internet-Draft is especially NOT a means of "publishing a specification" -- see Section 2.2 of RFC 2026 (BCP 9) for more specific information regarding the disposition of Internet-Drafts.
The syntax of DPoP inflates HTTP request headers with at least one extra JWT comprising at least hundreds of bytes. Because each request must have a fresh and unique proof-of-possession, this header is not compressible with HTTP/2 header compression. Transporting this extra data wastes natural resources (time and energy) and money. This waste will add up if Solid becomes widely used. Alternatively, OAuth2 bearer tokens can be compact (as an implementation detail of the Authorization Server), and can be compressed in HTTP/2 when reused for multiple requests.
DPoP requires expensive public-key cryptographic operations on every request by the client (signing) and server (verifying) even for multiple requests between the same client and server. This wastes natural resources and money. This requirement may be burdensome for resource-constrained clients (such as low-power or very slow devices). Alternative proposals employing proof-of-possession constructions can amortize the cost of signing and verifying over multiple requests between the same client and server.
Requiring a unique `jti` for every request (as a cornerstone of DPoP's security model) places an unreasonable burden on servers that must keep track of them, and complicates horizontal scaling of servers.
The unique `jti` chosen by the client, while it protects against replay, doesn't address the "happens-after guarantee of current possession" reason why you want to challenge a party to sign a nonce chosen by you (to exclude precomputation by an adversary).
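To make the operational cost concrete, here is a minimal sketch of the per-server `jti` replay cache such a scheme implies. The class name and TTL policy are illustrative, not from any draft; in a horizontally scaled deployment this state would have to live in a shared store:

```python
import time
from typing import Dict, Optional

class JtiReplayCache:
    """Remembers seen jti values until their proofs could no longer be valid."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._seen: Dict[str, float] = {}  # jti -> expiry timestamp

    def check_and_store(self, jti: str, now: Optional[float] = None) -> bool:
        """Return True if jti is fresh (and record it); False if it is a replay."""
        now = time.time() if now is None else now
        # Evict expired entries so the cache does not grow without bound.
        self._seen = {j: exp for j, exp in self._seen.items() if exp > now}
        if jti in self._seen:
            return False
        self._seen[jti] = now + self.ttl
        return True
```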
DPoP is designed to address a very specific, non-Solid threat/usage model, where an AS issues an opaque-to-client access token to the client, for use with a confederation of Resource Servers that trust the AS but not each other (that is, where a Resource Server can be an adversary to another Resource Server in the confederation). As such, many of the semantics of DPoP are unnecessary for Solid, and its syntax and operational requirements are needlessly expensive.
To encourage adoption of Solid authentication, ad hoc changes to OIDC (and especially compatibility-breaking changes and changes that will require more than trivial modifications to existing servers) should be minimized.
I don't think optimizing or eliminating Dynamic Client Registration is a problem that needs to be solved for (and especially by) Solid. If there is a belief that this is really a problem, it should be taken up in OIDC, to be debated and resolved by competent experts there.
If there are concerns with using the OIDC Implicit Flow, they should be taken up in the processes of OIDC, to be debated and resolved by competent experts there. For the greatest compatibility with OIDC Providers, Solid specs should instead recommend following the latest guidance from OIDC regarding login flow security, perhaps recommending (but not requiring) not using the Implicit Flow with sufficient and compelling justification.
The Verifiable Identity Credential is substantially just an ID token with a confirmation key. Unless divulging the client's `client_id` to a 3rd-party Resource Server is a major problem, I think that merely requesting inclusion of a `cnf` (confirmation) key in the `id_token` is a simpler ask for OpenID Connect Providers to implement, and in combination with the `sub` being the WebID and the App ID being in the `aud`, is sufficient. That being said, computation and delivery of a distinct (if substantially the same) VC is probably not insurmountably burdensome.
If there is to be a distinct Verifiable Identity Credential, then since it must be independently verifiable anyway, it should be in a separate OP response attribute rather than be embedded as a claim in the `id_token`. This is congruent with the current syntax of OIDC responses. Additionally, having it separate is more compact to transmit to the client (especially as an attribute of a URL fragment identifier), since making it a claim in the `id_token` will incur an additional Base64 encoding with corresponding expansion.
There are, at present, two authentication protocols mentioned in The Solid Ecosystem (TSE) document: WebID-OIDC and WebID-TLS. The WebID-TLS specification has already been formalized (as a draft) within W3C. The current WebID-OIDC document, however, has not been formalized into a similarly structured document with normative language.
From an implementation perspective, it would be helpful if the protocol were more precisely formalized.
As discussed in the panel meeting on 7/13, an initiative for this panel is proposed to design and document the mechanism through which clients are authenticated as part of the Solid-OIDC protocol specification.
This initiative should be sure to work in conjunction with the authorization panel and interoperability panel, who will have a number of applicable and dependent use cases.
I just did a superficial analysis of an unsuccessful brute-force attack against my personal WordPress install. The interesting thing about it isn't really how it was done, but the fact that they hammered a single-user, tiny blog with fewer than 10 posts for several hours at 5 requests a second. I didn't think any reasonably intelligent attacker would bother; it was probably a bigger hog on their resources than mine...
However, it did lead me to think that we should see whether there's something we should do to mitigate brute-force attacks at the IdP spec level. Perhaps it is an implementation detail, but I wanted to bring it up for discussion in case something can be done.
Current draft states:
- The client presents its WebID to the IdP and requests an Authorization Code.
How does the client present its WebID? Should it use some specific query parameter?
It also states:
The Access Token MUST be a JWT and the IdP MUST embed the client's WebID in the Access Token as a custom claim. This claim MUST be named client_webid.
I don't see in the draft how the IdP verifies that the client actually controls that WebID (and is not trying to impersonate it). We discussed, as one possibility, that the WebID Document returned when the client's WebID gets resolved would include some kind of `solid:redirect_uri` statement to associate that client WebID with a redirect URI.
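For illustration, a sketch of how an RS might read the draft's `client_webid` claim out of the Access Token. Signature verification is deliberately skipped here and must happen first in a real server; the helper names are hypothetical, while the claim name is the one mandated by the draft quoted above:

```python
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    """Decode the base64url payload segment of a JWT WITHOUT verifying it."""
    payload_b64 = jwt.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

def client_webid_of(jwt: str) -> str:
    """Extract the draft's required custom claim from an Access Token."""
    return decode_jwt_payload(jwt)["client_webid"]
```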
Background:
Currently, the WebID spec (https://www.w3.org/2005/Incubator/webid/spec/identity/#the-webid-http-uri) limits discovery to the http (or https) protocol. However, there are other possibilities for discovery, including but not limited to:
IPFS (https://ipfs.io/), a distributed file system that could resolve to Turtle documents.
Decentralized Identifiers - DID (https://w3c-ccg.github.io/did-spec/), a collection of different identifiers with a common interface. Some supported ecosystems can be seen here (https://w3c-ccg.github.io/did-method-registry/). Note that the DID method registry does include IPFS.
Pros:
Cons:
This issue relates to ensuring extensibility with AuthZ. Having the IdP issue signed JWTs as DPoP-bound access tokens, which the client uses with any number of resource servers, can limit the possibility for parties other than the IdP to stay responsible for authorization.
The previous draft by @dmitrizagidulin had the IdP issue an Identity Verifiable Credential, which the client would present to the RS. In the authorization panel we discussed a possibility where the client would need to present a Capability Credential as well, which could even be obtained from a party other than the IdP. I see the pattern where a specialized party issues a specific credential to the client as elegant, and as providing a solid foundation for extensibility.
UMA 2.0 has an aspect of Interactive Claims-Gathering. In Solid, an Identity VC, Capability VC, Membership VC, etc. could act as standard claims which the client could use while requesting access. I think we should take into consideration what @zenomt proposes in #12. The way I read the general idea behind it, the client could go through a claims (credentials) gathering step with the RS-associated AS and obtain an access_token to use with that RS. Having the IdP issue global access tokens would close off that possibility, and even the mentioned possibility of obtaining additional credentials from other parties. The IdP issuing an Identity Verifiable Credential seems like a clear way of encapsulating its responsibility without limiting other parties in taking on their responsibilities.
This has been talked about before, but I don't think an official issue has ever been made for this:
Currently, the WebId is considered the user's profile. It includes their name, their image, and other things about their person. In addition, it includes their authentication information like their OIDC Issuer and their certs.
This presents a problem: authentication information must always be public as it's needed for entities to confirm identity ownership. However, profile information could be public or private depending on the user's preferences, but putting it in the WebId requires this information to be public.
Note that it still would make sense to have triples related to discovery (like a pointer to a user's inbox) in the WebId, but that is out of the scope of the auth spec.
The auth spec should dictate the minimum number of things that MUST be in the WebId, and those things should only pertain to Authentication.
The following is what I think a WebId should look like:
@prefix : <#>.
@prefix solid: <http://www.w3.org/ns/solid/terms#>.
@prefix cert: <http://www.w3.org/ns/auth/cert#>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
@prefix example: <https://example.com#>.
:me
a example:AuthenticatableAgent, example:OIDCAuthenticatableAgent, example:RSAPublicKeyAuthenticatableAgent;
cert:key
[
a cert:RSAPublicKey;
cert:exponent 65537;
cert:modulus
"AB564BF3F36A712A6D17CE87EE49185D802DAF82313C925D51E82ED618200CFDF1542717F41A6D39C01726967A40A170547B050540A089B61A4143DBD4E360EBAC6F086F37A40CDAE61F33AE2181A187B3BE861D9ABF8A439532D0B4DAAB83686508CFB88627F77A8F0D117231521AE095334B28CAEC8FD2928C8A29CB15C38C27DA8B9426478BFB00CED71FB1904C9B0D27E2C4FF9F37882A917BD54957D4D9215E3625B8E195CCF2E8B18A528F9E4D1A19E525AF54CDB0804599DA9786D210AA04821977C7AF8F9C03BA1094F695A19F3C4B52DE9FC11ED14616559FC1DE0C610FBDC0F0DE5D817C417A4A5E6AC3FCD1C7B3F6B574BAFBD36E4B23164CE7D9"^^xsd:hexBinary
];
solid:oidcIssuer <https://solid.community>.
Three new terms are added in this proposal: `example:AuthenticatableAgent`, `example:OIDCAuthenticatableAgent`, and `example:RSAPublicKeyAuthenticatableAgent`. Each of these dictates a way an agent can sign in, and they exist to help clients determine the methods available.
And while we're at it, it would also make sense to get rid of the ability to discover the OIDC provider from the headers (https://github.com/solid/webid-oidc-spec/#authorized-oidc-issuer-discovery). The primary way to discover an OIDC provider should be via the WebId document, as it is more in line with linked data.
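As a rough sketch of WebID-document-based discovery: a real client should use a proper Turtle/RDF parser, but this regex handles the simple prefixed form shown in the example profile above, and the function name is illustrative:

```python
import re
from typing import Optional

def find_oidc_issuer(turtle: str) -> Optional[str]:
    """Look for a `solid:oidcIssuer <...>` statement in a WebID profile.
    Naive: assumes the solid: prefix and the <IRI> object form used above."""
    m = re.search(r'solid:oidcIssuer\s+<([^>]+)>', turtle)
    return m.group(1) if m else None
```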
It would help to have a place to collect formalizations of (the relevant parts of) the OAuth protocols.
We can do this in this issue/question or create a wiki page to collect them. These could help build a formalization for OAuth as applied to Solid, which may help resolve some thorny issues, clarify what needs doing, etc. We'll only know when we know what has already been done.
In the current PoP token scheme, the client manufactures an access token to present to a server. The client chooses how long this token is valid for (`nbf` and `exp`), constrained only by the validity of the `id_token` obtained from the client's OpenID Provider.
this is problematic for at least three reasons:
Possible ways of addressing this include (but are not limited to): a challenge carrying an `expires_in` or similar (doesn't handle case 3 above, though), and rejecting any token that includes the challenge and that has an `exp` after the challenge's expiration date.

In a typical interaction with an OIDC provider, both an access token and an ID token are presented to a client. It is typical for clients to then use the access token in subsequent interactions with resource servers (in this case, a Pod), but the current WebID-OIDC protocol describes using the ID token in the way one might ordinarily use an access token.
It would be helpful to clarify the semantics of these tokens w/r/t the token that is presented to a resource server and whether it would be better to use access tokens instead of ID tokens in this context.
As both the authentication and authorization/acl systems for Solid evolve, it would be helpful to standardize on (and document) some way to inform clients as to which system (and spec version) a particular storage pod (Resource Server, in oauth2 terms) is running.
For example, currently we use the `scope` param of the `WWW-Authenticate` response header to help `solid-auth-client` figure out which auth mode a Solid server is running in (WebID-TLS-only, or WebID-OIDC).
So, for node-solid-servers running in WebID-OIDC mode, on a 401 Unauthorized response, the server returns the following header:
WWW-Authenticate: Bearer realm="<pod serverUrl>", scope="openid webid"
The scope maps to, roughly:

- `openid webid` === WebID-OIDC mode
- `tls webid` === legacy WebID-TLS-only mode

(This was included to help solid-auth-client decide how to interface with the server.)
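The mapping above can be sketched as a small client-side helper. The scope values are the ones solid-auth-client looks for; the parsing is simplified and the function name is hypothetical:

```python
import re

def auth_mode_from_challenge(header: str) -> str:
    """Decide the server's auth mode from its WWW-Authenticate challenge."""
    params = dict(re.findall(r'(\w+)="([^"]*)"', header))
    scopes = set(params.get("scope", "").split())
    if {"openid", "webid"} <= scopes:
        return "WebID-OIDC"
    if {"tls", "webid"} <= scopes:
        return "WebID-TLS"
    return "unknown"
```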
So, we have several questions before us:

- (Currently the mode is communicated via the `WWW-Authenticate` response header, so if we continue going with that, what should the parameter be?)
- The `WWW-Authenticate` header spec tends to conflate authentication and authorization mechanisms into one -- how do we separate the two?

General ideas for improvement
As we're starting to move towards an initial public draft of a 1.0 solid spec, now is the time to consider the naming of our authentication protocol.
This is a proposal to change the name of the authentication protocol (and the corresponding spec) to:
Solid-OIDC.
This would:
Under proof of Identity:
Client registration (static or dynamic) is not required in Solid-OIDC; thus Access Tokens need to be bound to the client, to prevent token replay attacks. This is achieved by using a Demonstration of Proof-of-Possession (DPoP) mechanism at the application layer.
In practice, we need to know whether registration is optional for the client or for the IdP. If it is optional for the client, then the IdP MUST support both a flow with registration and one without, to accommodate clients that do not use a registration flow and clients that do. The same is true vice versa.
Furthermore, it was my understanding that this spec was designed to follow the implementation built by Inrupt; unless something has changed, I believe that implementation required at least dynamic client registration.
This issue raises a similar concern to the one @zenomt raised in #1, but from a different perspective.
It's true that the resource server can ignore a token it finds unsavory, but what if the following happens:
Alice logs into `shadyapp.com`, and `shadyapp.com` receives a token so it can query Alice's and Alice's friends' Pods. Later, Alice sees an article that says `shadyapp.com` is not to be trusted. So, she wants to globally revoke the token.
In a traditional OIDC environment this is possible because we're only dealing with one resource server, but in a world where this token could represent Alice to every resource server in existence, it becomes harder.
One possible way to handle this is to replace step 8 here (https://github.com/solid/webid-oidc-spec/blob/master/application-user-workflow.md#8-requests-public-keys) with a different route that allows the resource server to send the token for the authorization server to confirm. However, this solution negates any efficiency improvements through caching.
I know that it's only supposed to be a non-normative basic flow, but I don't think it's detailed enough to understand everything that's going on. I created a non-normative flow for an older version of the spec here https://github.com/solid/authorization-panel/blob/master/proposals/TrustedAppsReplacement.md#general-networking-flow. It would be fantastic to have that level of detail!
I think it would be useful to detail HTTP-Signatures based authentication for Solid. I wrote an implementation for it a few years ago for the server, and one before that for the client using Web-Crypto.
Not much is needed in addition to the existing spec. I can think of the following:
Consider implementing the identity token pattern for resolving webIds
Under OIDC, ID tokens must be structured as a JSON Web Token (JWT), which gives structure to those tokens. With OIDC, access tokens are often structured as JWTs, but that is not a hard requirement. If the WebID-OIDC protocol moves toward using access tokens in the interaction between a relying party and a resource server, it may make sense to formalize the structure of these tokens as JWTs with certain required fields.
Notes from Justin Richer:
The current PoP token flow can be kept with these changes:
In the proposed spec, it says:
In a decentralized ecosystem, such as Solid, the IdP may be:
The user
Any client application, or,
Preexisting IdPs and vendors
The "Preexisting IdPs and vendors" use case is the one we talk about the most, but the other two are confusing and need clarification.
When "The user" is mentioned, is this the self-signed auth flow? If so, the self-signed auth flow will need to be defined in this specification, as the current one isn't adequate for Solid's use case. If not, what does it mean?
I do not know of a use case where a "client or application" serves as the IdP. Could that be clarified?
I can't get myself into much more in the project, but it seems like this question has quite a lot of interesting implications, so it might be a topic that the panel should discuss.
So, first Arduino Uno is a cheap and small microcontroller. It has 2 kB RAM, 1 kB EEPROM and 32 kB of Flash. And yet, it can do quite a lot. I have Web servers on a couple, so that I can pull data from it.
It would be interesting to take it a step further and have a Solid client on it that can be authorized to write to parts of my pod, in which case it would be push, not just pull. With the constraints it has, it will be pretty hard, but therefore also interesting.
One issue is how to identify it and authorize it. I think that we could perhaps just add something to Solid that could mint a URI for it, so, you wouldn't have a full WebID for it, merely a URI that can be used in ACLs. It could be accompanied by a shared secret, a token that could be flashed into the EEPROM.
It seems difficult to implement TLS on it, though. So, could we possibly do something of lighter weight? Just pass a JWT across the network with symmetric crypto based on a shared secret between the Arduino and the Solid server? The Solid server would then have a trigger that would decrypt the message and possibly perform some semantic lifting before a representation is created.
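As a sketch of the shared-secret idea: an HMAC-signed (HS256) JWT avoids public-key crypto entirely on the device. The claim values here are hypothetical, and this shows symmetric *signing* rather than encryption, which is one reading of "symmetric crypto based on a shared secret":

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256_jwt(claims: dict, shared_secret: bytes) -> str:
    """Build a compact JWT signed with HMAC-SHA256 over header.payload."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(shared_secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_hs256_jwt(token: str, shared_secret: bytes) -> bool:
    """Server-side check: recompute the MAC with the same shared secret."""
    header, payload, sig = token.split(".")
    expected = hmac.new(shared_secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

The device side only needs SHA-256 and HMAC, which fit comfortably in 32 kB of flash; the `sub` claim could carry the minted URI used in ACLs.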
It may be nice to try integrating SQRL authentication. https://www.grc.com/sqrl
It is essentially a decentralized authentication scheme and may fit in nicely with the principles of Solid.
This issue intends to provide space for focused discussion on various aspects of this specific issue. A few excerpts from the discussion in #21:
@jricher
Instead of dynamic registration as in the current prototype, we should be using static registration or its equivalent to introduce a client to the AS. With the current prototype setup, registration is required to set up a client_id and associate it with the application keys. The downside is that for ephemeral applications, and even native applications that can get deactivated/uninstalled/abandoned, this leaves a lot of dangling registrations at an AS that will never be seen again. The client ID in OAuth2 allows an instance of software to be identified across multiple authorization requests, but it’s rare that a single application instance would ever ask for a second token. I believe that we can use technologies like PKCE and DPoP to fill in the functionality currently provided by DynReg. Coupled with this, we can use a WebID for the client ID, or use it to fetch/validate a client ID, and tie that to a set of display and key information for a client. Client IDs are public information, and any attacker could claim any client ID, but we can use WebID mechanisms to lock down the behavior of a given client ID such that any attacker would need to also have control over the appropriate URLs for an app.
@zenomt
regarding Dynamic Registration: today, dynamic registration is used only between a client (such as a single-page browser app (SPA)) and the user's OpenID Provider. since the user is likely to be the same from use-to-use of the same SPA in the same browser on the same device, the SPA can (and many do) remember the client ID & secret (and other aspects of the openid-configuration) from run-to-run in that browser on that device. and as we discussed on the call this morning, my proposal doesn't require a dynamic registration with the Resource Server's Authorization Server either.
@jricher
DynReg is needed in WebID-OIDC because the keying material used by the client is associated with the client's registration, and not with the access token itself. This doesn't make sense for ephemeral keys and in-browser clients. This is in fact where DPoP comes into play: the keys used by DPoP do not need to be pre-registered with the client, removing the need to have each instance of the client dynamically register itself with the target AS.
@zenomt
i'm not sure what you mean by "target AS". in the current Solid POPToken scheme and in my proposal, dynamic registration is only done between the client app and the user's trusted OpenID Provider.
@dmitrizagidulin
Currently, the WebID-OIDC protocol does not use the Dynamic Registration to register a client app's keys.
An ephemeral key pair for the client app is generated for each session, and is sent over to the IdP during the Authorization Request step (and returned in the ID Token). (These are the keys that are used for the PoP token etc.) So, currently, we're only using DynReg for a (throwaway) client_id, that's about it.
The conversation then goes a little further, but I think that last sentence from @dmitrizagidulin captures the current state of things.
I also see in #6
@jaxoncreed
The current OIDC use in Solid requires dynamic registration, which, for the Solid use case, is an unneeded step. https://oauth.xyz/ might be a better fit.
I think it would be useful to clarify whether that suggestion relates to the misunderstanding about registering client keys during Dynamic Registration, or whether OAuth.xyz addresses something related to the (currently undefined) requirements for client registration.
Last but not least, @zenomt mentioned Stateless Client Registration a couple of times. We could also document what role we see it playing.
This issue also seems directly related to authenticating clients #25 and identifiers for clients solid/authorization-panel#30
@dmitrizagidulin
An ephemeral key pair for the client app is generated for each session, and is sent over to the IdP during the Authorization Request step (and returned in the ID Token). (These are the keys that are used for the PoP token etc).
In #25 I mention the advantage of storing a non-extractable CryptoKey compared to storing a plain-text `client_id` and `secret`. I understand that in WebID-OIDC you intended the key pair to be generated by the client each session. Did you intend the client to store `client_id` and `client_secret` across those sessions, while using different key pairs? If so, this aspect seems missing in your summary.
So, currently, we're only using DynReg for a (throwaway) client_id, that's about it.
I recall that during one of the calls you mentioned NSS issuing both `client_id` and `client_secret`, which can then get used with the `refresh_token` grant. At the same time, I notice the lack of a `client_secret` in the snippet from https://github.com/solid/webid-oidc-spec/blob/master/application-user-workflow.md#11-returns-successful-registration
The current WebID-OIDC protocol definition does not mention PKCE as a mechanism for securing authentication tokens. There are good reasons why it should be used in the interaction between a client (relying party) and identity provider: it helps to avoid replay attacks in the token request interaction.
Should this mechanism be required by the WebID-OIDC specification?
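For reference, the PKCE exchange (RFC 7636, S256 method) amounts to the following; the function names are illustrative. The client sends the challenge with the authorization request and later proves possession by presenting the verifier at the token endpoint:

```python
import base64
import hashlib
import secrets
from typing import Tuple

def make_pkce_pair() -> Tuple[str, str]:
    """Generate a random code_verifier and its S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """Token-endpoint check: does SHA-256(verifier) match the stored challenge?"""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```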
I would like to clarify how, when, and with whom clients have to authenticate. OAuth provides a couple of mechanisms for a client to authenticate with an Authorization Server. Also, for Sender Constrained (bound) tokens, the client proves possession of a private key to the Resource Server, which we can also consider a form of authentication (possibly relying on a different way of identifying the client, solid/authorization-panel#30).
During the Authorization Code flow we rely on `redirect_uri`, discussed in #22, and we should also document in one place how we rely on it for identifying and authenticating clients.
I think we also need to clarify client authentication when using refresh tokens, especially if we consider issuing a `client_secret` and expecting the client to authenticate using `client_id` & `client_secret`. For fully in-browser applications, a non-extractable CryptoKey provides a more secure way of keeping a secret than a plain-text `client_secret`, and since we use Client Constrained (bound) tokens we require clients to generate a private key anyway. We could use the Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants for clients to also use their private key when authenticating with the authorization server; in that case we could possibly rely on the Client Credentials Grant and not even need refresh tokens.
The current WebID-OIDC specification describes the use of Proof of Possession tokens, but many details are left out.
- Is the `cnf` claim required?
- Does the `cnf` claim belong in the body or the header of the JWT?
- Does the `key` field belong in the body or the header of the ID token (or access token, #65)?
- What about a token with no `cnf` claim at all? (i.e. a non-PoP token; effectively, a downgrade attack)
- Is there a `"token_type": "pop"` claim requirement, as in the examples?
- Should the `id_token` claim be renamed? (c.f. #65)

Reference: https://tools.ietf.org/html/draft-fett-oauth-dpop-02
Issue 31: Accessing non-RDF sources directly via the browser brings up the question of how a 401, returned by resources to browsers that don't have the capability to read WACLs, can still allow the user to get access to the resource.
For passwords, the server can return a `WWW-Authenticate` header, which will get the browser to open a password prompt. For other authentication systems, a web page is usually returned, allowing the user to enter an OpenID, for example. But this last approach would not allow the user to take advantage of the ACL logic of his Launcher App, and so would require the user to choose haphazardly among OpenID providers, even if none of them fit the ACLs.
Q: What methods could help here?
It would be nice to see how this could be tied to a LauncherApp, so that further informed authentication decisions can be made by it. Could a browser plugin help?
@dlongley and @msporny's draft-cavage-http-signatures-05 has a few implementations, and is written in a style that has a chance of getting adopted by the IETF. It is generic enough to satisfy a wide set of use cases, and there is strength in numbers. This suggests that we use it as the basis for WebID-RSA (though it could do with a better name; see issue 5).
Of course we need to see if it works for us. That is what I have been working on:
What the SoLiD spec could suggest is a number of headers to use for authentication. Here is my first proposal. The SoLiD spec could suggest that the client sign at least the following headers:

- `User`: header for the WebID, when the user has one. This avoids an intermediary adding a WebID to the request and pointing the WebID profile to the WebKey document. (Assuming the WebKey document does not point to the WebID for reasons of privacy.)
- `Signature-Date` (name open for discussion). This header is needed so that the message cannot be used in a replay attack - it forms a good nonce. The server could also verify that this date is within a couple of seconds at most of the actual `Date` header sent by the browser. We are doing this over TLS, but these headers could end up in the logs, and those logs could be stolen.
- `(request-target)` seems like a good idea too. (Can that in some way be thought of as the nonce?)
- `Host`
The Authorization header would then look like this
Authorization: Signature keyId="https://joe.example/keys/key#",algorithm="rsa-sha256",\
headers="(request-target) host user signature-date",\
signature="Base64(RSA-SHA256(signing string))"
note:

- `\` above indicates that it and the newline character that follows are just for display purposes
- `Base64(RSA-SHA256(signing string))` is a function on a signing string, specified below, which would be:

(request-target): get /profile\n
host: jane.name\n
user: https://joe.example/#\n
signature-date: 2015-11-10T17:39:25.192Z\n
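The signing string above can be assembled mechanically. A sketch following the draft-cavage convention of lowercase header names joined by newlines (the RSA-SHA256 signature over it, with the client's private key, is omitted; the function name is illustrative):

```python
def signing_string(method: str, path: str, headers: dict) -> str:
    """Build the draft-cavage signing string for the headers proposed above:
    (request-target), host, user, signature-date, joined by newlines."""
    lines = [f"(request-target): {method.lower()} {path}"]
    for name in ("host", "user", "signature-date"):
        lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

# Example matching the request above:
s = signing_string("GET", "/profile", {
    "host": "jane.name",
    "user": "https://joe.example/#",
    "signature-date": "2015-11-10T17:39:25.192Z",
})
```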
What SoLiD adds to Http-Signature, then, is the definition of Signature-Date, which gives the time of the signature, with the requirement that it be no more than a few seconds out of sync with the real date; and the requirement of the User header for the WebID when the user has one (which is already something SoLiD uses).
This would then also require the WebID Working Group at some point to define WebID verification given a key and such a header (not difficult: just dereference the WebID and see if it points to the key, I think).
The Initial SoLiD server's request that could have launched this would then have looked like this:
401 Unauthorized
WWW-Authenticate: Signature realm="/",headers="(request-target) host user signature-date"
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Authorization,Host,User,Signature-Date
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: WWW-Authenticate
The Signature-Date header is defined as the precise time, in ISO 8601 format with millisecond precision, at which the header was signed. You can get this in JS with new Date().toISOString().
This would need to be added to the IETF header registry referred to by section 8.1 of RFC 7230 on HTTP 1.1.
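The "no more than a few seconds out of sync" requirement can be sketched as a server-side freshness check. The 5-second window here is an assumption, since the proposal does not fix an exact tolerance:

```javascript
// Reject requests whose Signature-Date is more than a few seconds from the
// server clock, to blunt replay of signatures recovered from stolen logs.
const MAX_SKEW_MS = 5000; // assumed tolerance; the proposal says "a few seconds"

function signatureDateFresh(signatureDate, now = Date.now()) {
  const signedAt = Date.parse(signatureDate); // ISO 8601, e.g. from toISOString()
  if (Number.isNaN(signedAt)) return false;   // unparseable -> reject
  return Math.abs(now - signedAt) <= MAX_SKEW_MS;
}
```

A fresh timestamp passes, while a minute-old one (or an unparseable value) is rejected.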
[1] The core verification code, without the HTTP header setting:
Some of the interactions defined in the WebID-OIDC protocol make it more similar to OAuth2 than to a classic OIDC interaction. It may make sense to generalize the protocol, aligning it with OAuth2. If so, it should be decided which formalisms from OIDC would be carried over (e.g. endpoint discovery via /.well-known/openid-configuration).
https://github.com/solid/authentication-panel/blob/master/oidc-authentication.md#resource-access
Ephemeral clients MUST use DPoP-bound Access Tokens; however, the RS MAY allow registered clients to access resources using traditional Bearer tokens.
I don't see how the RS would know whether a client registered with the OP or not. If we don't make DynReg optional for the client (#56), then all clients will actually be registered. And even if a client is registered with the OP, I don't think that on its own entitles it to use Bearer tokens.
according to discussion in #1, there is currently no support in solid auth clients for multiple realms/protection spaces per origin. the current POP token construction implies that the same token can be used in any protection space at an origin.
addressing this can be done entirely on the client side today, by paying attention to the realm parameter of the WWW-Authenticate response header in a 401, and taking care to differentiate and track by realm if an access token is rejected for some reason (for example, if it was revoked in one protection space).
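The client-side bookkeeping described here can be sketched as a token store keyed by (origin, realm) rather than by origin alone. The realm extraction below is a simplified assumption; a full implementation needs a proper WWW-Authenticate challenge parser:

```javascript
// Track access tokens per protection space: the key is origin plus the
// realm parameter from the 401's WWW-Authenticate header.
const tokensByProtectionSpace = new Map();

function realmFromChallenge(wwwAuthenticate) {
  // Simplified: grabs the first realm="..." value; real challenges can
  // carry multiple comma-separated schemes and parameters.
  const m = /realm="([^"]*)"/.exec(wwwAuthenticate || '');
  return m ? m[1] : ''; // empty string = unnamed protection space
}

function storeToken(origin, wwwAuthenticate, token) {
  tokensByProtectionSpace.set(`${origin} ${realmFromChallenge(wwwAuthenticate)}`, token);
}

function tokenFor(origin, wwwAuthenticate) {
  return tokensByProtectionSpace.get(`${origin} ${realmFromChallenge(wwwAuthenticate)}`);
}

// A token revoked in one protection space is dropped without touching
// tokens held for other realms at the same origin.
function dropToken(origin, wwwAuthenticate) {
  tokensByProtectionSpace.delete(`${origin} ${realmFromChallenge(wwwAuthenticate)}`);
}
```

With this structure, revoking the token for realm "/private" at an origin leaves the token for realm "/shared" at the same origin intact.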
it would also be handy if access tokens for different protection spaces had to be different, for example by doing #3 or by obtaining an access token from an authorization server instead of making one in the client.
at the very least, multiple realms per origin should not be prohibited, and documentation should acknowledge that it is a valid case in HTTP and clients should take care.
consider the discussion beginning at #1 (comment) to be incorporated by reference as though fully set forth in this issue.
While #25 focuses on authenticating clients, here we can focus on authenticating users. Preferably we can start following up with PRs for specific findings. In #2 we have some relevant points as well; I think we can extract the ones related to user authentication here and close #2, which served as meeting notes.
I can prepare PR for next Monday formulating above as group finding.
As I understand it, identifying apps indirectly via the Origin header is now no longer correct, and one should use the OAuth redirect_url as an identifier for the app. It is clear that Origins are much too broad to identify a single application.
I am not sure, though, how the OAuth Authorization Server authenticates the app. How does the OAuth server distinguish one app at an origin from another?
I might be wrong about this, but wouldn't recommending the caching of public keys as stated here:
It is RECOMMENDED that the RS keep a list of trusted IdPs, to facilitate the expedient lookup of JWKS through local trust stores or cached public keys.
be an anti-pattern, given that IdPs are advised to rotate their keys frequently?
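One way to reconcile caching with key rotation is to cache the JWKS per issuer with a short TTL and refetch on an unknown kid. This is a sketch, not anything the draft prescribes: fetchJwks is a hypothetical caller-supplied function, and the 5-minute TTL is an assumption:

```javascript
// Cache JWKS per issuer; refetch on expiry OR on an unknown kid, so a
// rotated key is picked up immediately instead of failing until the TTL
// elapses.
const JWKS_TTL_MS = 5 * 60 * 1000; // assumed 5-minute cache lifetime
const jwksCache = new Map();       // issuer -> { keys, fetchedAt }

function keyForKid(issuer, kid, fetchJwks, now = Date.now()) {
  let entry = jwksCache.get(issuer);
  const stale = !entry || now - entry.fetchedAt > JWKS_TTL_MS;
  let key = !stale && entry.keys.find(k => k.kid === kid);
  if (!key) {
    // Unknown kid within the TTL usually means the IdP rotated its keys:
    // fetch a fresh JWKS rather than trusting the stale cache.
    entry = { keys: fetchJwks(issuer), fetchedAt: now };
    jwksCache.set(issuer, entry);
    key = entry.keys.find(k => k.kid === kid);
  }
  return key || null;
}
```

Under this design the cache only saves round trips for keys it already knows; a rotation costs one extra fetch, not an outage.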
as i mentioned on the mailing list on july 12, i believe there are substantial and significant security issues with the currently implemented (with documentation waiting to be merged) Proof-of-Possession "bearer token" authorization scheme. the most important issues are:
also, these proof tokens are necessarily big and must be passed around on every request. HTTP/2 header compression might mitigate that somewhat over the network between a client and a server (or at least a reverse proxy or application gateway), but at some point in the processing chain a multi-K blob of bytes needs to be processed on every request.
some time ago i proposed an alternative authorization method that addresses the above concerns.
Current draft states:
The encoded HTTP response and resource URL of the DPoP token MUST be dereferenced and matched against the original request in the Access Token.
I fail to understand what exact steps this requires. Does "resource URL of the DPoP token" refer to the htu claim in the DPoP proof? What does "original request in the Access Token" mean? Strictly using the spec's terminology and specific JWT claim names could help to eliminate any confusion.
in the current POP token scheme, the client directly manufactures an access token to present to the server with the "Bearer" method. as discussed on the 2019-08-26 and other calls and in #1, i believe there are numerous problems with the current POP token scheme.
several issues raised in #1 can at least be partially addressed by approaches discussed in #3, #9, #10. a summary of the most important remaining problems specifically with the client making access tokens:
obtaining an access token from the resource server's authorization server instead of using an access token made by the client addresses the above and has the following benefits:
- … iat or serial number or something. techniques like Bloom filters could be used in very high-volume deployments.
- … 401 response, could provide an API endpoint to obtain an access token that is otherwise compatible with the server's existing Bearer-based authorization infrastructure.
- … "Bearer" model, which might make security people happier.
with the following cost:
Public Clients. Instead of dynamic registration as in the current prototype, we should be using static registration or its equivalent to introduce a client to the AS. With the current prototype setup, registration is required to set up a client_id and associate it with the application keys. The downside is that for ephemeral applications, and even native applications that can get deactivated/uninstalled/abandoned, this leaves a lot of dangling registrations at an AS that will never be seen again. The client ID in OAuth2 allows an instance of software to be identified across multiple authorization requests, but it’s rare that a single application instance would ever ask for a second token. I believe that we can use technologies like PKCE and DPoP to fill in the functionality currently provided by DynReg. Coupled with this, we can use a WebID for the client ID, or use it to fetch/validate a client ID, and tie that to a set of display and key information for a client. Client IDs are public information, and any attacker could claim any client ID, but we can use WebID mechanisms to lock down the behavior of a given client ID such that any attacker would need to also have control over the appropriate URLs for an app.
DPoP with ephemeral keys. Instead of the keys being registered in one-time-use client entries, DPoP allows a client to present an ephemeral key at token request time. These keys don’t have to be strongly associated to a client’s identity, but they tie a token request to subsequent token presentations. This prevents a token from being replayed by another party, such as a downstream RS. This key binding lasts as long as the token.
JWT Profile. In addition to the key presentation mechanism of DPoP, we need a way for the RS to figure out whom the token is for and what software is presenting it. We can define JWT claims for the user and client software, and define in the Solid profile that these are WebIDs and how they are to be dereferenced. This would allow the RS to validate the token's presentation mechanism, which binds to the client; the token's own signature, which binds to the AS; and the WebIDs for the user and client, which validate those parties and allow authorization decisions. This JWT access token would replace the use of the ID Token in the current process (and therefore no longer depend on OIDC either), and would need to comply with the various BCPs for JWTs that are floating around. 1 2 3
Structured Scopes. Proposed work by Torsten Lodderstedt. This allows us to define the kinds of information being requested and authorized by the protocol, and as Dmitry pointed out, the current thinking in the OAuth world aligns with the WAC concepts in Solid.
Alignment with existing infrastructure would take the form of an identity broker (for users on their way in) or an application gateway (for APIs on the way out) to bridge between the Solid ecosystem and traditional OAuth/OIDC/SAML systems.
And finally, much of this approach aligns with ongoing work in the XYZ Project, which is seeking to patch a lot of the problems in the OAuth protocol and its extensions.
At the moment, the Solid ecosystem depends on an authentication token structure that was designed before the relevant draft specification was released by the IETF. In November, work began to replace this token.
The following outlines a proposed approach to move the ecosystem towards a more secure token while easing the migration for the affected existing components.
The need to change the token was originally identified by Justin Richer and the new token is a central part of the current work on the Authentication specification. Given there was no official 1.0 spec for authentication, it was decided to make one with the new token at the center.
In parallel, work was undertaken to use the new token in developer libraries.
The following are the proposed success criteria for an upgrade plan:
The process of writing a normative spec for approval by the authX panel is ongoing. While it is not explicitly required to begin implementations, it is required to confirm that implementations are conformant to the specification.
Solid-Auth-Fetcher will replace Solid-Auth-Client as the primary auth client library in the solid ecosystem. While Solid-Auth-Client was designed to operate in the web browser, Solid-Auth-Fetcher is designed to operate in many environments with different interfaces. This will add some complexity to an application developer’s upgrade process.
To ease the transition, a wrapper for Solid-Auth-Fetcher with the same interface as Solid-Auth-Client will be provided. The wrapper should be made available on NPM as ‘solid-auth-client 2.0’ with a security warning to upgrade.
Solid-Auth-Client 2.0 should clear sessions that use the old token.
The first release of Solid-Auth-Fetcher should include server and browser compatibility. It should be compatible with the new token but should recognize when it is communicating with a server that only supports the old token and fail elegantly.
It would make sense to release Solid-Auth-Fetcher at the same time as solid-auth-client 2.0.
NSS must also be upgraded to work with the new token. The upgrade must use an identity provider that can issue both old and new tokens, and a storage server that can accept both old and new tokens. Server maintainers should be able to enable and disable support for each token type in the configuration.
inrupt.net and solid.community should be upgraded to the new version of NSS. The deployment should be configured to support both the old and new tokens for an agreed period, after which it should be changed to only work with the new token. This gives application developers time to upgrade their applications. The proposed duration for support of both tokens is 90 days.
Solid-Auth-Cli was the counterpart to Solid-Auth-Client that focused on login from the server. The interface for Solid-Auth-Cli is bad practice as it invites the developer to pass the username and password for the user. Therefore, Solid-Auth-Cli should be deprecated and replaced by Solid-Auth-Fetcher.
There are multiple libraries that depend on Solid-Auth-Client or Solid-Auth-Cli. An incomplete list is as follows:
Those depending on Solid-Auth-Client should be upgraded to use Solid-Auth-Fetcher or Solid-Auth-Client 2.0. Those depending on Solid-Auth-Cli should use Solid-Auth-Fetcher.
Dependencies on Solid-Auth-Client and Solid-Auth-Cli should be deprecated.
As this is a significant change, the approach should be approved by the auth panel initially, then a communications plan should be created and approved before communicating with the community.
The diagram below reflects the current recommended approach for rolling out the auth upgrade to the Solid ecosystem:
https://www.lucidchart.com/invitations/accept/1a670674-6786-4a8d-822c-27f307633de7
Looking at various specs and the many different ways in which to write this section, this one appeals to me the most - https://www.w3.org/TR/2019/REC-vc-data-model-20191119/#acknowledgements. I like the straightforwardness of it. Though I'm completely open to suggestions on how we format this section.
Can we start compiling a list of names to add?
This is based on an offline discussion between @acoburn, @dmitrizagidulin, and @jaxoncreed. TLDR, it's an authentication flow that does not require token wrapping or crypto on the client. Instead, it depends on making a round trip to the OP every time the client wants to query a new RS.
The current WebID-OIDC spec uses PoP with a wrapped id_token, to a large extent because this allows the RP to scope an audience claim to a particular RS (thus addressing the issue of token exfiltration). Per OIDC, the audience claim of an ID token is the RP itself, so this mechanism allows us to direct that audience claim to a particular RS instead.
But if we are changing from using ID tokens to access tokens, we potentially have much more flexibility around the structure of those JWTs. What if, in addition to the standard ID token returned from the /token endpoint of an AS, the response contained an access token with this structure:
{
"kid" : "AS key id"
}
.
{
"sub" : "WebID of agent" ,
"iss" : "AS/IdP identifier" ,
"aud" : "RS/Pod identifier" ,
"azp" : "RP/App identifier" ,
"iat" : "..." ,
"exp" : "..."
}
.
(AS-based signature over the above)
This would vastly simplify the token structure that an RS needs to validate and it eliminates the need for RPs to sign the outer envelope token. The downside is that, for each RS that a given RP interacts with, that RP would need an extra round-trip with the AS: either via the /token endpoint or some other endpoint where the RP could exchange an ID token for a signed, RS-scoped access token.
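The "vastly simpler" validation on the RS side can be sketched as a set of claim checks, assuming the JWT signature has already been verified against the AS key (a JWT library would normally do both). All identifiers, the WebID heuristic, and the error strings here are illustrative, not part of any spec:

```javascript
// Claim checks an RS might run on the proposed access token structure:
// trusted issuer, audience scoped to this RS, not expired, and a
// URL-shaped WebID in `sub`. Returns null on success, or a reason string.
function checkAccessTokenClaims(claims, { expectedIssuer, thisResourceServer,
                                          now = Math.floor(Date.now() / 1000) }) {
  if (claims.iss !== expectedIssuer) return 'untrusted issuer';
  if (claims.aud !== thisResourceServer) return 'token scoped to a different RS';
  if (typeof claims.exp !== 'number' || claims.exp <= now) return 'expired';
  if (typeof claims.sub !== 'string' || !claims.sub.startsWith('http')) {
    return 'sub is not a WebID URL';
  }
  return null; // acceptable: use claims.sub (user) and claims.azp (app) for authorization
}
```

Because aud is checked against this RS's own identifier, a token exfiltrated from one pod cannot be replayed at another, which is the token-scoping property the wrapped id_token was providing.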
In DPoP Validation, checking the iat field is mentioned, but checking the htu and htm fields is not. I know that both are referenced in the DPoP spec, but it seems odd that iat, which the DPoP spec also mentions, is brought over into this spec while the two arguably more important fields are left unmentioned.
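For concreteness, the three checks together can be sketched as follows. Per the DPoP draft, htm is matched against the request method and htu against the request URL without query or fragment; the 60-second iat window is an assumption, since the acceptance window is left to the server:

```javascript
// Validate the htm, htu, and iat claims of a decoded DPoP proof against
// the actual HTTP request. Returns null on success, or a reason string.
const IAT_WINDOW_S = 60; // assumed acceptance window for iat

function checkDpopProofClaims(proof, method, requestUrl,
                              nowS = Math.floor(Date.now() / 1000)) {
  const u = new URL(requestUrl);
  const htu = `${u.origin}${u.pathname}`; // strip query and fragment per the draft
  if (proof.htm !== method) return 'htm does not match request method';
  if (proof.htu !== htu) return 'htu does not match request URL';
  if (typeof proof.iat !== 'number' || Math.abs(nowS - proof.iat) > IAT_WINDOW_S) {
    return 'iat outside acceptance window';
  }
  return null;
}
```

Without the htm/htu checks, a proof captured for a GET on one resource could be replayed for a different method or URL, which is exactly what DPoP is meant to prevent.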
Currently the spec states:
The client presents its WebID to the IdP and requests an Authorization Code.
But there is no requirement in the spec for IdPs to implement a mechanism that confirms the application's possession of that WebID. This is important because if an app can claim to be any WebID, it will break the access control system down the line.