eclipse-ditto / ditto
Eclipse Ditto™: Digital Twin framework of Eclipse IoT - main repository
Home Page: https://eclipse.dev/ditto/
License: Eclipse Public License 2.0
If a JWT issuer changes, policies using the old issuer no longer work as intended, because the authorization subjects provided in JwtAuthenticationDirective have the new issuer as a prefix, which might differ from the one used when the policy was created.
Currently we only deliver change notifications via the WS or SSE for subjects which have unrestricted "READ" permission for changed Things (via the so-called "read-subjects").
For ACLs this was, and is, sufficient.
But when using a Policy for a Thing in which I only have "READ" permission for a single Feature property, I would expect to get a WS message if this property is changed (or also if the complete Feature/Thing is changed and the property was part of that change).
That is however a little difficult as we would have to build "views" of each emitted Event for each WS session.
As we do not want to instantiate the PolicyEnforcer for all used policies again in the Websocket sessions I suggest serializing this kind of information (which subject is allowed to read which parts of the Thing affected by an emitted ThingEvent) as header field.
The format of the header could look like:
{
  "sub1": [
    "/features/lamp1/properties/illuminance",
    "/attributes"
  ],
  "sub2": [
    "/"
  ]
}
This format could be calculated by the PolicyEnforcerActor for an incoming ModifyCommand, filling the header based on the JSON fields actually changed.
The Websocket/SSE would "only" have to use a small interpreter for this format in order to return only a partial Event.
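As a rough sketch of such an interpreter (class and method names are made up here, not Ditto API), a WS/SSE session could use the header to decide whether a subject may see a changed path:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: interprets the proposed "read-subjects" header format.
// A subject may read a changed path if one of its allowed pointers is "/"
// (whole Thing) or is a prefix of the changed path.
class ReadSubjectsFilter {

    static boolean mayRead(Map<String, List<String>> header, String subject, String changedPath) {
        List<String> allowed = header.get(subject);
        if (allowed == null) {
            return false;
        }
        return allowed.stream().anyMatch(pointer ->
                "/".equals(pointer)
                        || changedPath.equals(pointer)
                        || changedPath.startsWith(pointer + "/"));
    }
}
```

The WS session would then emit only the parts of the Event below the allowed pointers for each subscriber.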
I encountered an exception when changing the API version of a request to modify a Thing (PUT /things/{thingId}):
2018-01-29 07:26:46,057 ERROR [48c77585-47b9-470b-96e8-4f16a055f18a] o.e.d.s.t.u.a.ThingUpdater akka://ditto-cluster/system/sharding/search-updater/3/org.eclipse.ditto%3A8f63f01a-1fc0-4968-a2b1-9cf5ed9de08a - Thing to update in search index had neither a policyId nor an ACL: ImmutableThing [thingId=org.eclipse.ditto:8f63f01a-1fc0-4968-a2b1-9cf5ed9de08a, namespace=org.eclipse.ditto, acl=null, policyId=null, attributes={"a1":"v1"}, features=ImmutableFeatures [features=[ImmutableFeature [featureId=f1, properties={"p1":{"fpk1":"otherValue"}}]]], lifecycle=null, revision=2, modified=null]
I was able to trace this error back to inconsistent behavior between things and things-search, as things-search sometimes
To reproduce the problem, you can create a new Thing using API x and update the Thing using API y, which results in the following outcome:
API x (create) | API y (update) | Result in things | Result in things-search |
---|---|---|---|
1 | 1 | 👍 | 👍 |
1 | 2 | 👍 | ❌ |
1 | 2 (with ACL in body) | 👍 | ❌ |
1 | 2 (with policyId in body) | 👍 | 👍 |
2 | 1 | 👍 | ❌ |
2 | 1 (with ACL in body) | 👍 | 👍 |
2 | 2 | 👍 | ❌ |
2 | 2 (with ACL in body) | 👍 | ❌ |
2 | 2 (with policyId in body) | 👍 | 👍 |
The implementation should be changed to yield consistent and predictable behavior. Since policies in v2 offer far more configuration options, it should also be forbidden to switch back from policies to ACLs. I would suggest the following outcome:
API x | API y | Result in things | Result in things-search |
---|---|---|---|
1 | 1 | 👍 | 👍 |
1 | 2 | ❌ | - |
1 | 2 (with ACL in body) | ❌ | - |
1 | 2 (with policyId in body) | 👍 | 👍 |
2 | 1 | 👍 (automatically adds policyId to emitted ThingModified Events) | 👍 |
2 | 1 (with ACL in body) | 👍 (removes ACL and automatically adds policyId to emitted ThingModified Events) | 👍 |
2 | 2 | 👍 (automatically adds policyId to emitted ThingModified Events) | 👍 |
2 | 2 (with ACL in body) | ❌ | - |
2 | 2 (with policyId in body) | 👍 | 👍 |
The goal is to have another microservice running in the ditto-cluster which uses an AMQP 1.0 client library to connect to an Eclipse Hono instance (e.g. the sandbox running on hono.eclipse.org).
The README should give more details about using the library, e.g. document idioms for creating a JSON object, using JsonFieldDefinitions, etc.
As from the mail from 04.05.
requirements:
- All project web pages must include a footer that prominently links back to key pages, and a copyright notice. The following minimal set of links must also be included on the footer for all pages in the official project website:
a. Main Eclipse Foundation website (http://www.eclipse.org);
b. Privacy policy (http://www.eclipse.org/legal/privacy.php);
c. Website terms of use (http://www.eclipse.org/legal/termsofuse.php);
d. Copyright agent (http://www.eclipse.org/legal/copyright.php); and
e. Legal (http://www.eclipse.org/legal).
- Approved Eclipse logos are available on the Eclipse Logos and Artwork page: https://eclipse.org/artwork/
- A user must be requested to give their consent, and explicit consent must be given by the user before a project website can start using cookies. This requirement also includes cookies used by 3rd party services such as, but not limited to: Google Analytics, Google Tag Manager, and social media widgets.
- Project websites must not collect and/or store and/or display personal information.
- Project websites using 3rd party services such as, but not limited to, google analytics must be explicit about which company or companies have access to the data collected. For example, the project website must identify on their website the individuals or organizations who have access to google analytics data.
First step towards Eclipse Vorto integration:
Format:
"features": {
  "my-lamp": {
    "definition": [ "org.eclipse.example:Lamp:1.0.0" ],
    "properties": {
      ...
    }
  }
}
It would be great to have the possibility to run Ditto in OpenShift/Kubernetes. As far as I can see, docker-compose is used. It might make sense to provide a YAML template for Kubernetes/OpenShift, like e.g. Kapua does.
In order to add another important Digital Twin feature to Ditto, it is required to distinguish between different state perspectives.
We already have the "twin" and "live" perspective being an important differentiation between "cached state" and "actual live state".
However, if a user wants to configure the desired state on a Digital Twin (e.g. "I want this light to be switched on"), no matter whether the device is currently able to receive the desired change or not, Ditto needs a place to store that.
I would suggest that
I would also like to propose to have a "common API" which handles the "twin/live/desired" under the hood.
E.g. that a user may trigger API calls in which she defines:
true
(first trying via "live", falling back to "desired" after a timeout when live doesn't answer in time)
A device may retrieve the state it still has to apply in order to be "in sync" with the Digital Twin.
E.g.:
What is your opinion on that?
Containing:
JWT based authentication with Google OAuth2.0 would be the best way to provide login.
It would be cool to have some kind of "landing" page containing:
application/json
null
attributes
Currently Ditto uses version 0.9.3 of the japicmp-maven-plugin; the most recent version is 0.11.0. Since version 0.9.4 the plugin is marked as @threadSafe, which prevents the following warning during a multi-threaded Maven build:
[WARNING] * Your build is requesting parallel execution, but project *
[WARNING] * contains the following plugin(s) that have goals not marked *
[WARNING] * as @threadSafe to support parallel building. *
[WARNING] * While this /may/ work fine, please look for plugin updates *
[WARNING] * and/or request plugins be made thread-safe. *
[WARNING] * If reporting an issue, report it against the plugin in *
[WARNING] * question, not against maven-core *
[WARNING] *****************************************************************
[WARNING] The following plugins are not marked @threadSafe in Eclipse Ditto :: Signals :: Commands :: Batch:
[WARNING] com.github.siom79.japicmp:japicmp-maven-plugin:0.9.3
[WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
[WARNING] *****************************************************************
Currently, in the ClusterUtil, Ditto implements the initial forming of a cluster itself (based on DNS and only working in Docker Swarm).
The Akka Management library includes:
Use a CDN instead (if even possible):
documentation/src/main/resources/docson/lib/*
documentation/src/main/resources/docson/docson.js
documentation/src/main/resources/docson/docson-swagger.js
documentation/src/main/resources/docson/widget.js
Basically the "target" configuration for AMQP 1.0 endpoints should already work.
Hono, however, defines that the AMQP target address has the form control/${tenant_id}/${device_id}.
Therefore Ditto has to support dynamically determining the device_id which should be placed inside the address.
E.g. by applying some kind of "variable" replacement where the following things could be referenced:
When currently subscribing for Events/change notifications via
the consumer always gets all change notifications it is allowed to see.
On SSE this can be reduced by providing specific thingIds and fields, so that changes are only published if the thingId matches or the change affected a specified field.
The subscription could and should be more fine-grained.
The idea is to support adding an optional filter, defined via the RQL syntax Ditto already uses for search.
That way the following subscription rules could be applied:
- thingId starts with org.eclipse.ditto:*: like(thingId,"org.eclipse.ditto:*")
- temperature was affected by this change: exists(features/temperature)
- temperature was greater than 25: gt(features/temperature/properties/value,25)
- thingId starts with org.eclipse.ditto:* and the change affected the temperature: and(like(thingId,"org.eclipse.ditto:*"),exists(features/temperature))
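As a sketch, a fine-grained SSE subscription could then look as follows (the filter query parameter name is an assumption here, not an existing API):

```
GET /api/2/things?filter=and(like(thingId,"org.eclipse.ditto:*"),exists(features/temperature)) HTTP/1.1
Accept: text/event-stream
```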
Currently a Thing in the search-index is not restored when its policy is restored. This means that the Thing cannot be found via the Search API, though it is still available via the Things API.
Ditto's "live" channel is currently only available via the WebSocket binding.
This means that via HTTP it is currently not possible to address a device listening on the "live" channel, answering with its live state or changing its live state.
The HTTP API for that could be really simple:
- Add a channel=live query parameter to all existing /things HTTP routes.
- Omitting the channel=live query parameter would use the default twin perspective, which should remain the default as the value is the cached (persisted) one.
Example: Finding out whether a "lamp" is on:
# retrieve the last reported value:
GET /api/2/things/org.eclipse.ditto:fancy-lamp-1/features/lamp/properties/on
returns: true
# retrieve the live value from the device itself:
GET /api/2/things/org.eclipse.ditto:fancy-lamp-1/features/lamp/properties/on?channel=live
returns: false
# someone must have manually switched it off in the meantime
The failover parameter of the JMS client should additionally be configured with at least failover.initialReconnectDelay and failover.reconnectDelay. Otherwise it reconnects after 10 ms, and the connection attempt fails because establishing the connection takes longer than that.
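For the Apache Qpid JMS client, these options are expressed in the failover URI; a possible configuration could look like this (host and delay values are placeholders):

```
failover:(amqps://broker.example.org:5671)?failover.initialReconnectDelay=1000&failover.reconnectDelay=5000&failover.maxReconnectDelay=60000
```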
In #60 the Feature Definition was added to the Ditto model.
#60 however did not yet ensure that the Properties of a Feature follow the types defined in its Definition.
The idea of this issue is to add another Ditto microservice responsible for validating that the JSON of a Feature's Properties follows its Definition.
The Definition is interpreted as coordinates to an Eclipse Vorto "Function Block".
The Ditto team already contributed a generator to Eclipse Vorto which generates JsonSchema from a Vorto Function Block.
The new Ditto microservice could look up the Function Block at the public Vorto repo "http://vorto.eclipse.org", use its "generator HTTP API" in order to let the Vorto repo generate the JsonSchema and validate changes to a Feature against that JsonSchema.
To avoid generating the JsonSchema every time, the generated schemas should be persisted in Ditto's database.
Currently the subjects of an authorization context are defined as a fixed string, which means you have to grant this subject access to all devices of a connection. To provide more flexibility we want to introduce placeholders in the authorization subject which are replaced before the signal is processed in Ditto.
For example you can define the authorization context of a connection as:
...
"authorizationContext": ["ditto:{{ header:device-id }}"]
...
The placeholder {{ header:device-id }} is then replaced by the value of the device-id header. If a placeholder cannot be resolved, e.g. because the specified header is missing, the message is rejected.
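A minimal sketch of the placeholder resolution (class and method names are made up here, not Ditto's actual implementation):

```java
import java.util.Map;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: resolves {{ header:<name> }} placeholders in an
// authorization subject against the message headers. An empty Optional
// signals that a placeholder could not be resolved and the message
// must be rejected.
class PlaceholderResolver {

    private static final Pattern HEADER_PLACEHOLDER =
            Pattern.compile("\\{\\{\\s*header:([^\\s}]+)\\s*\\}\\}");

    static Optional<String> resolveSubject(String subjectTemplate, Map<String, String> headers) {
        Matcher m = HEADER_PLACEHOLDER.matcher(subjectTemplate);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = headers.get(m.group(1));
            if (value == null) {
                return Optional.empty(); // unresolved placeholder -> reject message
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return Optional.of(sb.toString());
    }
}
```

With the example configuration above, a message carrying the header device-id=4711 would be processed with the authorization subject ditto:4711.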
With a fresh clone of the initial check-in, I get the following compilation error:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:35 min
[INFO] Finished at: 2017-10-06T17:26:31+02:00
[INFO] Final Memory: 104M/1130M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.5.1:compile (default-compile) on project ditto-signals-commands-batch: Compilation failure
[ERROR] /Users/kartben/Repositories/ditto/signals/commands/batch/src/main/java/org/eclipse/ditto/signals/commands/batch/ExecuteBatch.java:[155,45] unreported exception X; must be caught or declared to be thrown
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :ditto-signals-commands-batch
Does Ditto have a logo? If not, let me know if you need my/EF's help setting up a crowd sourcing campaign to get one designed.
It would be nice to have a Ditto logo for the upcoming I4.0 white paper :-)
Currently the documentation focuses on the "TWIN" channel: https://www.eclipse.org/ditto/protocol-overview.html
MongoDB has a limit of 1024 bytes per index entry: https://docs.mongodb.com/manual/reference/limits/#indexes
In the search-db, we create lots of index entries (e.g. for attributes). In things-db we do not create such indexes, because we do not need extensive search functionality there.
If a thing-event (or whole thing in case of sync) contains a single value which is too big to index, the whole update-operation of the search-updater will fail. This leads to data inconsistencies.
Therefore, we should truncate values which exceed the MongoDB limit.
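A minimal sketch of such a truncation (the class, method, and the exact byte budget are assumptions; a real implementation must also avoid splitting a multi-byte UTF-8 sequence):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: cap string values before indexing so a single
// oversized value cannot break the whole search-updater write.
class IndexValueTruncator {

    // MongoDB rejects index entries larger than 1024 bytes; we assume a
    // budget somewhat below that to leave room for the key name and overhead.
    static final int MAX_INDEX_VALUE_BYTES = 950;

    static String truncate(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= MAX_INDEX_VALUE_BYTES) {
            return value;
        }
        // Naive cut at the byte limit; decoding replaces a possibly
        // split trailing multi-byte sequence with a replacement char.
        return new String(bytes, 0, MAX_INDEX_VALUE_BYTES, StandardCharsets.UTF_8);
    }
}
```

The truncated value would then be stored in the search index while the full value stays untouched in things-db.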
Hello. I'm trying to use Hono and Ditto. Both are already running on my computer, but I can't find any documentation about the AMQP connection. I found the DevOps commands, but I can't access this API because it's in the sandbox. I started Docker from the ditto/docker file and opened these in my browser:
- localhost:8081 opened
- localhost:8081/api opened
- localhost:8081/devops not found
I started Docker from the ditto/docker/sandbox file and opened these in my browser:
- localhost:8081
- localhost:8125
- ditto.eclipse.org
They can't be opened.
How can I connect Hono and Ditto via the AMQP protocol so that they can talk to each other?
Provide a NOTICE file as described at http://www.apache.org/dev/licensing-howto.html.
Please include a google analytics tracker in the Ditto website and follow the instructions here https://wiki.eclipse.org/IoT/Community_metrics#Web_traffic to share access with Eclipse so that the stats can be consolidated in our monthly reports.
Thanks!
Benjamin -
We should think about what and how to provide basic documentation for Eclipse Ditto.
Currently we have a small "Readme" and a "Getting started" (both as .md files).
The documentation should contain chapters about:
The format of the documentation should be:
My proposal is to start with a list of GitHub flavored markdown files in a folder structure within "documentation" and use the GitHub repo for rendering / navigation.
Add commands required for building "live" commands/responses/events.
Please consider making docker images available e.g. on hub.docker.com for those that are interested to try out ditto, but do not want to set up a complete build environment for it.
When transmitting the protocol via Websocket or other channels, the content-types should be defined.
Define custom content type:
http://restcookbook.com/Resources/using-custom-content-types/
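Following the vendor-tree convention from the linked article, a content type for the Ditto Protocol could, for example, look like this (the concrete name is a suggestion):

```
Content-Type: application/vnd.eclipse.ditto+json
```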
Hello Ditto team,
I would like to know what the role of the things_snaps collection is. There is very little documentation about it.
My data has grown too fast: in less than 24 hours it increased by 70 GB, and almost all of the data is stored in the things_snaps collection.
> db.getCollection('things_snaps').stats(1024 * 1024 * 1024)
{
"ns" : "things.things_snaps",
"size" : 225,
"count" : 38962,
"avgObjSize" : 6215555,
"storageSize" : 91
}
Is there any configuration or rule to restrict it?
I'm using Ditto version 0.3.0-M1
Thanks in advance
Currently we only support the "id_token".
We should however also support the "access_token" and then make a lookup at Google's "userinfo" API in order to find out the Google user-id.
As this is an HTTP request, we must cache the response for as long as the JWT session is valid.
Currently the amqp-bridge can connect to AMQP 1.0 endpoints (like for example Eclipse Hono).
We also want to support connecting to AMQP 0.9.1 brokers (e.g. the commonly used RabbitMQ) in order to add another interface for interacting with Ditto via the Ditto Protocol.
Regardless of the type of the headers defined in MessageHeaderDefinition, those headers are serialized as JSON strings. The headers TIMEOUT and STATUS_CODE have integer types and should be serialized as numbers.
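Illustratively (assuming the wire names of these headers are timeout and status), the serialization should change from:

```
"headers": { "timeout": "30", "status": "200" }
```

to:

```
"headers": { "timeout": 30, "status": 200 }
```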
Hi All,
I connected between hono and ditto on AMQP bridge.
sender code :
messageSender.send(HonoExampleConstants.DEVICE_ID, null,
    "{\"topic\": \"appstacle/xdk_" + value + "/things/twin/commands/create/\",\"headers\": {},\"path\": \"/\",\"value\": {\"__schemaVersion\": 2,\"__lifecycle\": \"ACTIVE\",\"_revision\": 1,\"namespace\": \"appstacle\",\"thingId\": \"appstacle:xdk" + value + "\",\"policyId\": \"appstacle:policy_deneme14\",\"attributes\": {\"location\": {\"latitude\": 44.673856,\"longitude\": 8.261719}},\"features\": {\"accelerometer\": {\"properties\": {\"x\": 3.141,\"y\": 2.718,\"z\": 1,\"unit\": \"g\"}}}}}",
    "application/json",
token, capacityAvail -> {
capacityAvailableFuture.complete(null);
}).map(delivery -> {
nrMessageDeliverySucceeded.incrementAndGet();
messageDeliveredFuture.complete(null);
return (Void) null;
}).otherwise(t -> {
System.err.println("Could not send message: " + t.getMessage());
nrMessageDeliveryFailed.incrementAndGet();
result.completeExceptionally(t);
return (Void) null;
});
Regards.
Using Docker version 18.03.0-ce and docker-compose 1.21.0, the docker-compose.yml "command" configuration is not overwritten.
This causes a wrong startup sequence of services and a crash of all Ditto services.
The "command" line has to be replaced with an "entrypoint" configuration:
command: sh -c "sleep 10; java -jar /starter.jar"
entrypoint: sh -c "sleep 15; java -jar /opt/ditto/starter.jar"
We currently use Akka Distributed Data for synchronizing an authorization cache (thing-cache, policy-cache) between all services which either read (search, gateway) or update (things, policies) this data.
This works fine, but creates unnecessary network load because the whole cache is distributed to all shards, even if a shard does not need the data. Another disadvantage of the current approach is that the "Akka Distributed Data" cache cannot be cleared, which may cause memory problems in the long run.
In this task, we should at least address the following issues:
Caffeine seems to be a good choice for a local cache implementation: https://github.com/ben-manes/caffeine
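To illustrate the "local cache instead of replicated cache" idea with nothing but the JDK, here is a minimal LRU cache sketch; Caffeine provides the same idea plus size- and time-based eviction, statistics, and async loading (this class is an illustration, not a proposal to hand-roll the cache):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: each service instance keeps only the entries it
// actually uses, instead of receiving the whole replicated data set.
class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict least-recently-used entry
    }
}
```

Bounding the cache also solves the "cannot be cleared" problem of the current Akka Distributed Data approach, since stale entries are simply evicted.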
The following internal documentation links are broken:
basic-connections.md:
- [Manage connections](/connectivity-manage-connections.html)
- [Payload Mapping Documentation](/connectivity-mapping.html)
connectivity-manage-connections.md:
- [payload mapping](/connectivity-mapping.html)
- [Connections](/basic-connections.html)
Kamon 1.0.0 was released recently: http://kamon.io/teamblog/2018/01/18/kamon-1.0.0-is-out/
We are still using version 0.6.x for monitoring in the Ditto services.
Updating to 1.0.0 brings the huge benefit that we can use the Kamon Prometheus reporter (http://kamon.io/documentation/1.x/reporters/prometheus/), which leads to a more predictable "pull approach" for metrics in the cluster.
Currently, when running under high load, the metrics sent via UDP even make the networking load higher.
Hi All,
I want to get the values of a twin for a given "_revision".
How can I do that? Or is this possible at all?
Regards
Mhumoglu
java.lang.IllegalArgumentException: The headers did not contain a value for mandatory header with key !
Hi, I see that you use minimal-json 1.9.5. Can you point me to the CQ so that our project can piggyback and maybe get minimal-json 1.9.5 into Orbit?
We cannot use Netty 3.10 due to licensing issues (as mentioned in CQ 16316).
As Netty 3.10 is used by default by Akka and that won't change (as mentioned in this Akka issue) we have to use the UDP (or TCP)-based Aeron remoting as a replacement (which is stable in our used Akka version).
Netty 3.10 must then be excluded from our dependencies.
The AMQP-bridge currently expects that all messages it receives are already in Ditto Protocol (JSON).
That means that if Ditto connects for example to Eclipse Hono, the AMQP-bridge can only make sense of data sent in our defined protocol.
That is a problem for various reasons:
Ditto therefore wants to support a first easy "mapping" of those payloads:
Our idea is to define mappings in JavaScript which is executed inside the JVM with as much sandboxing as possible.
Any other ideas?