mickeyn / ponapi
a Perl client/server implementation of {json:api} v1.0
We want to be able to package the PONAPI::Client and PONAPI::Server separately
As discussed before, and following a request from @moltar, we should consider adding a DBIC
support layer.
I think we need a PONAPI::Repository::DBIC consuming PONAPI::Repository and implementing the required methods in DBIC-speak.
PRs are welcome.
http://discuss.jsonapi.org/t/introducing-ponapi-a-perl-implementation-of-json-api/268/4
If this is done in the mock repository, we can take some logic out of it into the DAO:
we can probably filter the included list based on relations before calling the repository action.
Review the spec and create a series of tests that help keep us in line with the spec.
GET /articles/2/authors responds with self => 'people/88'; similarly, GET /articles/2/comments responds with self => '/comments'.
This is because PONAPI::Builder::Document::get_self_link uses the first resource builder to build the self link.
In document.errors we MAY (according to the spec) include an HTTP status per error entry (each entry can have its own status, but the response itself can only have one).
Should we set it automatically, or do you think this is redundant?
We are passing $doc to the repository object and expect it to raise the error (reasonable) but also set the error (which is a spec thing and we lose control of).
I think we should provide (probably in PONAPI::DAO::Repository) a generic way to raise errors where we ensure the status codes (and maybe other info pieces) based on a reported scenario and according to the spec.
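A hedged sketch of what such a generic error-raising helper could look like; the scenario names, the %STATUS_FOR map, the package name, and the raise() signature are all assumptions for illustration, not PONAPI's actual API:

```perl
# Hypothetical sketch: repositories report a named scenario, and the
# helper guarantees a spec-appropriate HTTP status code. The scenario
# names and status map are assumptions, not PONAPI's real interface.
package My::Repository::Errors;
use strict;
use warnings;

my %STATUS_FOR = (
    bad_request_data => 400,
    unknown_type     => 404,
    unknown_resource => 404,
    conflict         => 409,
    server_error     => 500,
);

# $doc is assumed to provide raise_error( $status, \%details )
sub raise {
    my ( $doc, $scenario, $message ) = @_;
    my $status = $STATUS_FOR{$scenario} || 500;    # unknown scenarios become 500
    $doc->raise_error( $status, { message => $message } );
    return $status;
}

1;
```

This way the repository never touches numeric codes directly, and the DAO layer stays the single place where spec conformance is enforced.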
I see that there is a wrapper for the YAHC UA which is able to do HTTPS queries. However, I can't find any way to replace Hijk with YAHC and pass the SSL parameters to it.
Client and Server should include PONAPI version information in Request and Response.
This can be either in HTTP headers (X-PONAPI-Server-Version, X-PONAPI-Client-Version) or just an additional key in the meta info.
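A minimal sketch of the header variant, assuming a PSGI-style arrayref of header key/value pairs; the header name matches the one proposed above and the version value is illustrative:

```perl
# Hypothetical sketch: append a PONAPI version header to a PSGI-style
# response header arrayref. The version value here is illustrative.
use strict;
use warnings;

sub add_server_version_header {
    my ( $headers, $version ) = @_;
    push @$headers, 'X-PONAPI-Server-Version' => $version;
    return $headers;
}
```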
$ curl http://localhost:5000/artists
{"jsonapi":{"version":"1.0"},"errors":[{"detail":"{JSON:API} Missing Content-Type header","status":415}]}
This is a GET request and has no body, so a Content-Type header should not be required.
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17
"The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient..."
I'm trying to use PONAPI in conjunction with ember-data which doesn't send a Content-Type for GET requests. I can configure the adapter to do so but it would be nice if it worked out of the box.
The filters should support some set of standard operators (filter[id:gt]=1 to filter on IDs greater than 1), passing them down to the DAO layer to do whatever makes sense for it.
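A sketch of how such a filter key could be parsed before handing it to the DAO layer; the operator set and the returned structure are assumptions, not an existing PONAPI interface:

```perl
# Hypothetical sketch: turn a query key like "filter[id:gt]" into a
# structure the DAO layer can interpret; the operator set is illustrative.
use strict;
use warnings;

my %KNOWN_OPS = map { $_ => 1 } qw( eq ne lt le gt ge like );

sub parse_filter_key {
    my ($key) = @_;
    return unless $key =~ /\Afilter\[(\w+)(?::(\w+))?\]\z/;
    my ( $field, $op ) = ( $1, $2 // 'eq' );    # bare filter[id] means equality
    return unless $KNOWN_OPS{$op};
    return { field => $field, op => $op };
}
```

Unknown operators yield nothing, so the caller can turn that into a 400 instead of silently ignoring the filter.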
we can do without some of the modules we have in the dependency list - let's clean the list a bit by removing them.
So, @stevan and I decided to keep the restriction out of the current implementation since it's messing up the code.
We're leaving this issue open for discussion in the future if this comes up as a real problem.
Hey, this is cool! It looks very similar to OData (http://www.odata.org/). Maybe we can re-use the code for an OData client.
one should be able to ENABLE sorting per attribute:
why?
The data will be provided mostly by a database which may have indices on some fields (attributes), i.e. server-side sorting of those fields would be cheap. Sorting of other fields wouldn't be done on the server side, so the client shouldn't be able to request it.
cheers
Mark
We discussed this: the errors put into the document can either be a string, or they can be a HASH ref with a single key "code" containing a string that can be fed into an i18n mechanism (Locale::Maketext, etc.) to properly internationalize these errors using some kind of plugin.
Or, we could do it totally differently.
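A sketch of the HASH-ref variant, with a plain lookup table standing in for a real Locale::Maketext-style handle; the code key and message here are made up:

```perl
# Hypothetical sketch: errors are either a plain string (used verbatim)
# or { code => ... }, resolved through a pluggable lexicon. The lexicon
# below is a stand-in for a real i18n plugin.
use strict;
use warnings;

my %LEXICON = (
    'error.not_found' => 'Resource not found',
);

sub localize_error {
    my ($error) = @_;
    return $error unless ref $error eq 'HASH';
    return $LEXICON{ $error->{code} } // $error->{code};    # fall back to the raw code
}
```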
we see links to '//' when data is empty
Let's be sure that Dancer is actually helping us and not hurting us.
http://jsonapi.org/format/#fetching-pagination
Starting with this request:
GET /retrieve/articles?page[offset]=15000&page[limit]=5000
If the repo then wants to add pagination links, currently they have to construct the urls themselves. I'm thinking we should add something to make that simpler; maybe as simple as passing a hash to ->get_self_link() that will turn those into a url.
Side note: This code might need to be shared between client and server.
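One possible shape for that helper, sketched as a free function rather than a method on the document: given a base path and a hash of page parameters, produce a pagination URL. No percent-encoding is done, and the function name is an assumption:

```perl
# Hypothetical sketch: build a pagination link from a base path plus a
# hash of page parameters; keys are sorted only to make output stable.
use strict;
use warnings;

sub page_link {
    my ( $base, %page ) = @_;
    my $query = join '&', map { "page[$_]=$page{$_}" } sort keys %page;
    return "$base?$query";
}
```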
Need to review the spec and look for issues that might have slipped through the cracks.
See $title. We need to pass it to the DAO, the request object, the document builder, and maybe to the repo too.
$subject
Plack::Middleware executes (mostly) outside of an application, for some cases this will be fine for PONAPI, but not all of them. It might be worth looking into how we might create a plugin system specific to PONAPI (with API hook points, etc). Specifically things like user/session based rate-limiting, non-generic profiling/instrumentation, application specific load balancing, and probably a few other things I can't think of right now.
Probably best to find an actual use case and start from there, but I just wanted to make a note before this slipped my mind.
The POD in PONAPI::Document shows the second argument to raise_error() being a scalar - it should be a hashref. PR to follow.
This code is an evolution of a hack, it needs to be redone in a sensible way.
Do we want them? Do we care?
This works:
perl -Ilib -MPONAPI::Client -MData::Dumper -E 'my $p = PONAPI::Client->new(); warn Dumper( $p->retrieve(type => "articles", id => 2, include => [qw/comments/]) )'
From my understanding of the spec (http://jsonapi.org/format/#fetching-includes) this should give me a 400:
perl -Ilib -MPONAPI::Client -MData::Dumper -E 'my $p = PONAPI::Client->new(); warn Dumper( $p->retrieve(type => "articles", id => 2, include => [qw/asdasd comments.not_there/]) )'
Instead, it gives me the same result as if I hadn't used include at all.
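A sketch of the missing validation: checking each include path against the relationships the server knows about, so unknown paths can produce the 400 the spec calls for. The %KNOWN map (relationship name to target type) is illustrative:

```perl
# Hypothetical sketch: walk each dotted include path through a map of
# known relationships; anything that falls off the map is invalid.
use strict;
use warnings;

my %KNOWN = (
    articles => { comments => 'comments', authors => 'people' },
    comments => {},
);

sub invalid_includes {
    my ( $type, @includes ) = @_;
    my @bad;
    for my $path (@includes) {
        my $t = $type;
        for my $step ( split /\./, $path ) {
            my $next = $KNOWN{$t}{$step};
            if ( !defined $next ) { push @bad, $path; last }
            $t = $next;    # descend into the related type
        }
    }
    return @bad;
}
```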
http://jsonapi.org/format/#crud-updating-responses
Spec sez:
200 OK
If a server accepts an update but also changes the resource(s) in ways other than those specified by the request (for example, updating the updated-at attribute or a computed sha), it MUST return a 200 OK response. The response document MUST include a representation of the updated resource(s) as if a GET request was made to the request URL.
That presents a handful of complications depending on how the underlying repo is implemented, e.g. replication delay.
We could solve this by having each resource say whether we need to wait in case of a 200, or if we can return immediately; or a server side configuration to turn all update-successful 200s into 202s, since the spec also says:
202 Accepted
If an update request has been accepted for processing, but the processing has not been completed by the time the server responds, the server MUST return a 202 Accepted status code.
Or something else? :)
Currently, we let the underlying repo handle most of the HTTP status codes; this means that they end up calling $doc->set_status($code).
This means the DAO has no way of checking if the repo is conforming to the spec, and whoever writes a repo needs to know the spec to set the appropriate codes.
Might be worth trying to add constants or convenience methods to the document, à la
$doc->deferred_processing;
$doc->no_content;
$doc->used_client_provided_id; # alias to ->no_content
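A sketch of how those could be layered over the existing set_status(); the numeric codes follow the spec sections quoted above, and the package name is made up:

```perl
# Hypothetical sketch: spec-named convenience methods so repositories
# never hard-code numeric HTTP status codes themselves.
package My::Document::StatusHelpers;
use strict;
use warnings;

sub deferred_processing     { $_[0]->set_status(202) }
sub no_content              { $_[0]->set_status(204) }
sub used_client_provided_id { $_[0]->set_status(204) }    # alias for no_content

1;
```

Composed into the document class (as a role or parent), this also gives the DAO one choke point for checking spec conformance.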
Even though 'Perl Object Notation' is a made-up thing, we liked how it sounds; so we kept it.
We need a utility for DAO/repository to generate links.
This will help both the DAO (location header in 'create') and repository implementation.
We needs it!
I see the examples in the extensions sections sending headers that should produce errors according to the spec.
see: http://jsonapi.org/format/#content-negotiation-servers
vs. http://jsonapi.org/extensions/bulk/#creating-multiple-resources
need to investigate this / open an issue with {json:api}
Should we be able to handle binary data in the POST and PATCH actions, and if so, what should that actually do?
Arguments to sort and include only apply to the primary resource being requested. This means that there's no limit to the number of relationships (and worse, included resources) one can get back.
Perhaps we should allow specifying what they are sorting, by specifying the type:
GET /articles?sort=-id
vs
GET /articles?sort[articles]=-id
This would allow requests like this:
GET /articles?sort[authors]=-id
# Sort the authors relationships by id
GET /articles?include=authors,comments&sort[include]=-people.id
# Sort the included array by the people's id.
I think we could modify sort that way fairly easily. page could follow a similar pattern:
GET /articles?page[number]=5
vs
GET /articles?page[articles][number]=5
The problem here is when trying to page relationships or includes; there's no way to do this without breaking the spec, because of this:
http://jsonapi.org/format/#document-top-level
Primary data MUST be either:
a single resource object, a single resource identifier object, or null, for requests that target single resources
an array of resource objects, an array of resource identifier objects, or an empty array ([]), for requests that target resource collections
There is no place for us to place a links member with the pagination links!
Same situation with include -- the spec says to return a collection, so we can't put the pagination in there.
Any ideas? Should we file this one under potential future extensions?
The "repository" noun appears to be the most critical concept in the entire implementation, however, I am unable to find a sentence in the documentation that defines the repository at any level or references a definition. Could we add one please?
There's a schema file published (linked in the FAQ): http://jsonapi.org/schema.
Let's run a check to see that we are fully compatible with it.
Look at Test::TCP to solve this, you can look here (https://metacpan.org/source/MIYAGAWA/Plack-1.0037/lib/Plack/Test/Server.pm) to see how Plack does it.
add conf. file + read it from the server code (remove the constants)
Just saw this module ...
https://metacpan.org/pod/JSON::MergePatch
... and thought it might be useful if we were ever to support the idea of "patching" from a different perspective.
We have a need to handle the PONAPI specific URI forms (ex: "foo[bar]=1", "baz=1,2,3,4", etc). We have a need to both read and write these URIs.
The server will need to write these URIs when building links sections, and it also needs to read them when they come in via HTTP requests. The client will need to write these URIs when building an HTTP request, and also to read them when they are embedded into a links section of a document.
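A sketch of reading and writing the two forms mentioned above, bracketed keys ("foo[bar]=1") and comma lists ("baz=1,2,3,4"); no percent-encoding is done, and the structure shape is an assumption:

```perl
# Hypothetical sketch: parse "foo[bar]=1" / "baz=1,2,3,4" style params
# into a structure and serialize them back, so client and server can
# share one implementation. No percent-encoding is handled here.
use strict;
use warnings;

sub parse_param {
    my ( $key, $value ) = @_;
    my @values = split /,/, $value;
    if ( $key =~ /\A(\w+)\[(\w+)\]\z/ ) {
        return { name => $1, sub_key => $2, values => \@values };
    }
    return { name => $key, sub_key => undef, values => \@values };
}

sub serialize_param {
    my ($p) = @_;
    my $key = defined $p->{sub_key}
        ? sprintf( '%s[%s]', $p->{name}, $p->{sub_key} )
        : $p->{name};
    return $key . '=' . join ',', @{ $p->{values} };
}
```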
Quick disclaimer: I don't think we need to tackle this for the dev release, but I think it might be crucial for the CPAN release.
Creating even a very simplistic repository wrapping a SQL table can take significant amounts of time, and requires in-depth knowledge of PONAPI -- or at least, copypasting from another repo and cargo-culting code.
For example, this is a simple retrieve_all implementation:
sub retrieve_all {
    my ($self, %args) = @_;
    my $type     = $args{type};
    my $document = $args{document};
    my $dbh = $self->dbh;
    my $sth = $dbh->prepare(qq{ SELECT * FROM $type });
    $sth->execute;
    while ( my $row = $sth->fetchrow_hashref ) {
        my $id       = delete $row->{id};
        my $resource = $document->add_resource( type => $type, id => $id );
        $resource->add_attribute( $_ => $row->{$_} ) for keys %{$row};
        $resource->add_self_link();
    }
    return;
}
Which seems sane enough, until you notice that it's missing a ton of features. If we want to also return the resource's relationships, we need at the bare minimum:
my %relationships = (
    articles => {
        comments => {
            type          => 'comments',
            one_to_many   => 1,
            rel_table     => 'rel_articles_comments',
            id_column     => 'id_articles',
            rel_id_column => 'id_comments',
        },
        authors => ...,
    },
    ...
);
sub _fetch_relationships {
    my ($self, %args) = @_;
    my ($type, $id) = @args{qw/ type id /};
    my $type_relationships = $relationships{$type} || {};
    return unless %$type_relationships;
    my $dbh = $self->dbh;
    my %rels;
    foreach my $rel_type ( keys %$type_relationships ) {
        my $relationship_info = $type_relationships->{$rel_type};
        my $table      = $relationship_info->{rel_table};
        my $id_col     = $relationship_info->{id_column};
        my $rel_id_col = $relationship_info->{rel_id_column};
        my $sth = $dbh->prepare(
            "SELECT $rel_id_col FROM $table WHERE $id_col = ?"
        );
        return unless $sth->execute($id);
        while ( my $row = $sth->fetchrow_hashref ) {
            push @{ $rels{$rel_type} ||= [] }, $row->{$rel_id_col};
        }
    }
    return \%rels;
}
sub _add_relationships {
    my ($self, %args) = @_;
    my $type     = $args{type};
    my $resource = $args{resource};
    my $all_relationships = $self->_fetch_relationships(%args);
    foreach my $rel_type ( keys %$all_relationships ) {
        my $relationships = $all_relationships->{$rel_type} || [];
        next unless @$relationships;
        my $one_to_many = $self->has_one_to_many_relationship($type, $rel_type);
        foreach my $rel_id ( @$relationships ) {
            $resource->add_relationship( $rel_type, $rel_id, $one_to_many )
                     ->add_self_link
                     ->add_related_link;
        }
    }
}
sub retrieve_all {
    ...
    # Inside the while loop:
    $self->_add_relationships(
        %args,
        resource => $resource,
    );
    ...
}
Add to that code for include, fields, page, sort, and filter, and the complexity quickly gets out of hand, not to mention the number of places to forget some bit of code (like adding the self links).
There must be a better way!
My current thought is to add two new roles for the common SQL case, PONAPI::DAO::Repository::Database and PONAPI::DAO::Repository::Table.
::Database handles a collection of tables, similar to what our current MockDB does.
It would implement all the methods required by PONAPI::DAO::Repository (has_type, retrieve, etc.) and handle the building of the document, but it delegates the fetching/handling of the data to smaller methods provided by the ::Table-composed classes.
So its retrieve_all might look something like this:
sub retrieve_all {
    my ($self, %args) = @_;
    my $type     = $args{type};
    my $document = $args{document};
    my $table_obj = $self->table_object_for_type($type);
    my $rows = $table_obj->search_where(
        columns => $self->fields_to_columns(%args),
        where   => $self->filter_to_where(%args),
        limit   => $self->page_to_limit(%args),
        # etc
    );
    my @includes;
    foreach my $row ( @$rows ) {
        my $relationships = $table_obj->fetch_relationships(%args, ...);
        my $resource = $document->add_resource(...)->add_attributes(...)->add_self_link(...);
        $resource->add_relationships(...);
        push @includes, $relationships if ...;
    }
    # etc etc
}
Meanwhile, the class consuming the ::Database role will just need to provide a handful of methods to implement has_type and friends; the class consuming the ::Table role would likewise just need a handful of lines of meta info (what the columns are, where the relationships are).
I'm not particularly attached to the specifics of this idea, but if we can spare people from using the low-level ::Repository and implementing everything on their own, I think that would be a huge gain, both in sanity and in our ability to ensure spec-conformant responses.
Example: if you want to get just the resources without the relationships, links, etc.
I didn't find anything in the spec, but we can easily implement such controls (using the filter param?).
Write tests to make sure Server works
We currently don't, but some of them are really useful, like bulk create.
GET 'http://0:5000/people?include=articles'
...
"links" : {
"self" : "/people"
},
...
Need to fix and add a test.
Add more tests