Comments (13)
@dlongley and @msporny seem to favor `Signature-Date` over `Signature-Time`. I am ok with that.
I'd just point out two small issues that could favour `Signature-Time`:
- the time is meant to be millisecond precision, so we're really emphasizing the time aspect
- the format is different from the `Date` header: this one is in ISO 8601 format. `Signature-DateTime` would be precise, but it's really a bit long for a header name
What does the larger community think? For the moment `Signature-Date` is winning 2 to 1, so I have changed my code and the explanation above to use that.
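To illustrate the format difference, here are the two styles side by side using `java.time` (the epoch value is arbitrary, chosen only for the example):

```scala
import java.time.{Instant, ZoneOffset, ZonedDateTime}
import java.time.format.DateTimeFormatter

// A fixed instant with millisecond precision, for illustration only.
val t: ZonedDateTime =
  Instant.ofEpochMilli(1454705880123L).atZone(ZoneOffset.UTC)

// The classic HTTP `Date` header format (RFC 1123): second precision only.
val httpDate: String = DateTimeFormatter.RFC_1123_DATE_TIME.format(t)

// ISO 8601 / RFC 3339, as proposed here: keeps the milliseconds.
val isoTime: String = DateTimeFormatter.ISO_INSTANT.format(t)
```

The ISO form yields `2016-02-05T20:58:00.123Z`, while the RFC 1123 form silently drops the `.123`.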
from authentication-panel.
I have now implemented the client and server parts of the protocol in Scala and Scala.js respectively and have gotten it to work. I have verified that I can display pages served from localhost that then fetch https://joe.example:8443/, intercept the 401, and make a new, signed request that passes authentication. Currently that request is still refused because the access control rules on the server don't yet know how to grant rights to a WebKey-identified user, which is next on my todo list. This works in Chrome Canary and Firefox Developer Edition; I have not had time to test it more widely.
The current server code is here:
The core specification implementation part:
- `HttpAuthentication` - parses the `Authorization: Signature...` header
- `WebKeyVerifier` - verifies the WebKey by fetching the public key at the WebKey URL
- `HttpSignaturesTest` - tests that the above works correctly
The web-facing server layer:
- auth function: the authentication and authorization function, which uses Principals stored in the session to avoid unnecessary re-authentication
- `WACAuthZ`: the access control part for WebKeys and verification of WebIDs - perhaps also setting information for where the client can publish his key (using a `Link` header, presumably)
- `ReadWriteWebController` - sets the CORS headers (the code there is repetitive and needs to be improved). Notice how one needs to return a 200 for the `OPTIONS` request (or, I think, browsers won't see the headers)
The client:
- `KeyStore`: creates a key, saves it, and fetches it from IndexedDB
- `WebResourceActor` - an actor that uses the Fetch API to fetch keys, and signs the HTTP headers if a 401 with the `WWW-Authenticate: Signature...` header is found

Todo:
- publish the generated key - currently this has to be done by hand, by copying information from the console to a file; the key URL is also hard-coded
- create a user interface for managing the key and tying it to a WebID
- verify that the client from localhost can authenticate to joe.example, get a link to jane.example, and authenticate there too with the same key. That is recursively the same as what we have already done, so there should be no problem there.
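For readers unfamiliar with draft-cavage-http-signatures, the core of what the client signs can be sketched as follows: the string to sign is the list of covered headers, one per line, as `lowercased-name: value`, with the `(request-target)` pseudo-header covering the method and path. This is a minimal sketch; the function name and types are illustrative, not taken from the actual codebase:

```scala
// Sketch of the draft-cavage-http-signatures signing string: for each entry
// in `signed`, emit `lowercase-name: value`, joined with '\n'.
// "(request-target)" is the pseudo-header `method path`, method lowercased.
def signingString(
    method: String,
    path: String,
    headers: Map[String, String],   // header name (already lowercased) -> value
    signed: List[String]            // order matters: it is part of the signature
): String =
  signed.map {
    case "(request-target)" => s"(request-target): ${method.toLowerCase} $path"
    case name               => s"$name: ${headers(name)}"  // throws if a signed header is absent
  }.mkString("\n")
```

The resulting string is what the RSA signature is computed over; the server rebuilds the same string from the request it received and verifies the signature against the fetched public key.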
Here is my experience using the Fetch API with Web Crypto from https://joe.example to https://jane.example: it requires 3 requests.
- The browser first makes a `GET` call, which returns the 401 with CORS headers.
It is good news that the first call is actually a `GET` and not an `OPTIONS`, as this would lead one to think that on following calls, if the client submits the correct cookie, this will be the only needed call. It also suggests that perhaps the client can, in future sessions, make calls to the same server by immediately starting off with an `Authorization:` header, avoiding the following steps.
- The browser then makes an `OPTIONS` call, which returns a 200 with the same CORS headers.
This is weird. Should it not just use the CORS headers from the 401 response in the previous step?
- The JS API intercepts the initial 401 returned by the browser's `GET` in step 1 and, after adding an `Authorization: Signature...` header, makes a new call that should return a 200 (in this case it still returns a 401 because the server is not set up properly).
The server could then set a cookie, and from then on each request to that server would only be the `OPTIONS` call followed by the actual call. But if so, which of the browser or the application should manage the cookie?
The only weird thing here is the additional `OPTIONS` call, which does not seem necessary in the case of a `GET` that returns the correct headers. (Perhaps the server does not?)
The only advantage of a CORS proxy would be that the proxy could act as a cache for remote resources and provide a consistent interface for them - such as a `SEARCH` method on resources, or pre-fetching of resources - until those types of functionality are more widely available.
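The retried request in step 3 carries the signature parameters in the `Authorization` header. A minimal sketch of assembling that header value, per draft-cavage-http-signatures (the key URL, parameter values, and helper name here are made up for illustration):

```scala
// Sketch: assemble the `Authorization: Signature ...` header value from the
// draft-cavage-http-signatures parameters. `keyId` would be the WebKey URL;
// `sigBase64` the base64 RSA signature over the signing string.
def signatureHeader(
    keyId: String,        // URL at which the server can fetch the public key
    algorithm: String,    // e.g. "rsa-sha256"
    signed: List[String], // the covered headers, in signing order
    sigBase64: String
): String =
  s"Signature keyId=\"$keyId\",algorithm=\"$algorithm\"," +
    s"headers=\"${signed.mkString(" ")}\",signature=\"$sigBase64\""
```

The `headers` parameter is what lets the server rebuild the exact signing string before verifying.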
You can cache the preflight on a per-resource basis. Caching it for an entire site might happen someday, but is not possible today.
Ok: once authentication happens and cookies are set, one only needs 1 connection for `GET` requests. This is very good news.
Still, this means that on the first unauthenticated request browsers arguably make one connection too many: the `OPTIONS` that follows the initial 401 is not necessary if the original 401 returns the correct CORS headers (see the previous snapshots).
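The per-resource preflight caching mentioned above is driven by the `Access-Control-Max-Age` header, which tells the browser how long it may reuse a preflight result for that resource. A sketch of what such a preflight response could contain (the header names are standard CORS; the particular method/header lists and lifetime are assumptions, not the actual server configuration):

```scala
// Sketch: CORS headers a server could return on OPTIONS so the browser
// caches the preflight result per resource and skips the extra round-trip
// on subsequent requests to the same resource.
def preflightHeaders(origin: String): Map[String, String] = Map(
  "Access-Control-Allow-Origin"  -> origin,
  "Access-Control-Allow-Methods" -> "GET, PUT, POST, OPTIONS",
  "Access-Control-Allow-Headers" -> "Authorization, Accept",
  "Access-Control-Max-Age"       -> "86400" // cache the preflight for a day
)
```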
On to cookies. It is easy to have the server set a signed cookie for the WebKey. The client needs to ask for credentials in the request, as shown in the following Scala.js code:
```scala
val requestInit = literal(
  headers      = literal("Accept" -> rdfMimeTypes),
  requestCache = RequestCache.reload,
  // <- does not work if the server's Access-Control-Allow-Origin is set to *
  credentials  = RequestCredentials.include
).asInstanceOf[js.Dictionary[js.Any]]

val request = new HttpRequest(proxiedURL.toString, requestInit)
```
The server also needs to make sure the `Access-Control-Allow-Origin` header is set to the origin, or the JS will throw an exception.
The question now is whether it is actually a good idea to allow JS apps to use the normal cookie mechanism. What are the dangers? This would then allow any JS to act on the LDP resources. It may be better if the user were to allow JS from each origin individually. This could be done by the server setting a cookie for each origin with a `Set-Origin-Cookie` header, and the browser JS adding an `Origin-Cookie` header, which would act exactly like a cookie but be under the full control of the origin, which could store it in IndexedDB or local storage. The server's access control rules could then allow access to certain resources for any key allowed by the user (WebID authentication over HTTP Signature, which would work like WebID-RSA), to some resources only for some keys, and to others only for the browser itself.
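Since the wildcard `Access-Control-Allow-Origin: *` is rejected by browsers for credentialed requests, the server has to echo the request's `Origin` back, and should only do so for origins it trusts. A sketch of that logic (the function name and the trusted-set approach are assumptions for illustration, not the actual server code):

```scala
// Sketch: CORS response headers for a credentialed request. The wildcard `*`
// is not allowed with `credentials: include`, so we echo the request Origin,
// but only when it is on an allow-list the user has approved.
def corsWithCredentials(
    requestOrigin: Option[String],
    trusted: Set[String]
): Map[String, String] =
  requestOrigin.filter(trusted.contains) match {
    case Some(origin) => Map(
      "Access-Control-Allow-Origin"      -> origin,
      "Access-Control-Allow-Credentials" -> "true",
      "Vary"                             -> "Origin" // responses differ per origin
    )
    case None => Map.empty // untrusted or absent Origin: no CORS headers at all
  }
```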
@bblfish - what do you see as the advantages of HTTP-Signature over the proposed WebID-RSA mechanism?
thanks @dmitrizagidulin for the question.
- The main advantage is that HTTP-Signature is already in RFC format, has the support of players such as Oracle, Amazon, Digital Bazaar (@msporny), and others, and is already going through the IETF process. We would have to do all that work to end up in exactly the same place by specifying this ourselves - and we do have to do that work if we are going to have any chance of being taken seriously. It's one less thing people can criticise us with.
- HTTP-Signature has a few more features than WebID-RSA, which will allow us to answer criticisms more easily.
  - It allows one to sign any number of headers, so it's easier to fix things if necessary without breaking other implementations. And we don't quite know what's out there on the web.
  - Also, if someone discovers a problem with RSA - some backdoor nobody knew about - then it would be easy to switch, with minimal change, to a new crypto algorithm.
- There is no reason we should not, in SoLiD, specify a subset of HTTP-Signature as the one we require implementations to understand. For example, we can specify that we expect SoLiD implementations initially to implement only the RSA algorithm. We can do this to make it easier for people to get the basics going, and to make testing simpler.
+1
On Fri, 5 Feb 2016 8:58 PM Henry Story [email protected] wrote:
HTTP-Signature would be great, especially since TLS client certificates are somewhat problematic with HTTP/2.
Moving this to solid/issues soon.
The HTTP-Signatures spec has a GitHub repo now and a list of implementations.
Btw, we do have a spec for using Signing HTTP Messages (developed at the IETF now) with Solid, called HttpSig.
I have an implementation in Scala of IETF Signing HTTP Messages draft 07 in the httpSig repo.
Currently it works with JVM-based Akka.
I am going to try to get it to work with http4s next, so I can use it in the browser with JS - and it could also be made to work on Node.js.
My EU funding is coming to an end, so if anyone has real needs for other implementations this is the best time to contact me. I think it should be possible to make releases even for Servlets... :-)
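For contrast with the older draft-cavage format above, the IETF Signing HTTP Messages drafts build a "signature base" in which each covered component goes on its own line as `"name": value`, closed by a `"@signature-params"` line that fixes the component list and parameters. A rough sketch of building that base (simplified: the drafts allow many more parameters and component types than shown here):

```scala
// Sketch of the IETF HTTP Message Signatures signature base. Each covered
// component (e.g. "@method", "@authority", or a header name) becomes a line
// `"name": value`; the final "@signature-params" line binds the component
// list together with the `created` and `keyid` parameters.
def signatureBase(
    components: List[(String, String)], // e.g. "@method" -> "GET"
    created: Long,                      // Unix timestamp of signature creation
    keyId: String
): String = {
  val names  = components.map { case (n, _) => "\"" + n + "\"" }
                         .mkString("(", " ", ")")
  val params = s"$names;created=$created;keyid=\"$keyId\""
  val lines  = components.map { case (n, v) => "\"" + n + "\": " + v }
  (lines :+ ("\"@signature-params\": " + params)).mkString("\n")
}
```

The signature is then computed over this base, and the `Signature-Input` and `Signature` headers carry the parameters and the signature bytes respectively.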