pivotal-cf / aqueduct-courier
CLI component that collects telemetry from Tanzu Application Service
License: Apache License 2.0
Pivotal provides the Gitbot service to synchronize issues and pull requests made against public GitHub repos with Pivotal Tracker projects.
If you do not want to use Pivotal Tracker to manage this GitHub repo, you do not need to take any action.
If you are a Pivotal employee, you can configure Gitbot to sync your GitHub repo to your Pivotal Tracker project with a pull request.
If you are not a Pivotal employee, you can request that [email protected] set up the integration for you.
You might also be interested in configuring GitHub's Service Hook for Tracker on your repo so you can link your commits to Tracker stories. You can do this yourself by following the directions at:
https://www.pivotaltracker.com/blog/guide-githubs-service-hook-tracker/
If there are any questions, please reach out to [email protected].
In order to ingest our telemetry data for our own consumption, the telemetry send command should accept a --target flag to shuttle the data to endpoints other than the Pivotal one.
Although we could consume the produced tarball ourselves, being able to leverage the send command lets us easily integrate our existing CI process to send telemetry data to both Pivotal and our own endpoints.
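To make the proposal concrete, a hypothetical invocation might look like the following. Note that the --target flag does not exist today; the --path flag and the example URL are also assumptions for illustration (only --api-key is mentioned elsewhere in this issue):

```shell
# Hypothetical: send the collected tarball to Pivotal as today...
telemetry send --path /tmp/FoundationDetails.tar --api-key "$API_KEY"

# ...and send the same tarball to an internal endpoint as well
# (the --target flag and the URL below are illustrative only)
telemetry send --path /tmp/FoundationDetails.tar \
  --target https://telemetry.internal.example.com/ingest
```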
With that said, I don't yet have an endpoint to receive the data, nor a system to marshal, process, or visualize it. We're currently working on some in-house tools, and as telemetry gets fleshed out, we'll start putting engineering time into it. I'm also not 100% sure what we'd do about the --api-key yet; we'd have to do a little reverse-engineering of how things work. As a side note, if the strategy for telemetry is to have customers collect and visualize their own data, it could be beneficial to publish API contracts for how the data and HTTP payloads are structured, and/or some sample data. It would also help to know a little about how the data is being processed so that customers can get ideas on how to use it themselves.
Thoughts?
Thanks for your time!
In order to no longer require storing the collected data tarball locally and then sending it with a separate command, it may be beneficial to combine the collect/send process into one step (held in memory, perhaps?) that writes nothing to disk.
Not sure what the UX would look like for this. I like having the separation of telemetry send/collect... but if we had something like telemetry collect-and-send (telemetry push? I'm horrible at naming), we wouldn't need to pass the tarball around as a separate Concourse task (which isn't a huge deal)... But more interestingly, we could run telemetry in different places, like as a scheduled task on PCF itself... which I guess we could already do, but it may just be more convenient to have a single command that doesn't write anything to disk.
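As a rough sketch of the "collect and send without touching disk" idea: the tarball can be built entirely in memory and the bytes shipped straight to an HTTP endpoint. The names here (build_tarball_in_memory, the file paths) are illustrative assumptions, not the aqueduct-courier implementation:

```python
import io
import tarfile

def build_tarball_in_memory(collected_files: dict) -> bytes:
    """Pack a {filename: contents} mapping into a gzipped tarball held in memory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in collected_files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Illustrative collected data; a real run would gather Ops Manager output here.
tarball = build_tarball_in_memory({"ops_manager/deployed_products.json": b"[]"})
# These bytes could then be POSTed directly (e.g. via urllib.request) instead of
# being written out and re-read by a separate `telemetry send` invocation.
```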
As you might deduce, this is definitely a low-priority issue... In fact, the more I think about it, the more I think it's not that valuable a feature and is a "meh" use case. But I've already written this issue up, so I'll leave it up and close it real soon.
Thanks for your time!
I'm very curious. Can you tell us a little bit about what this project is?
Looks like a tool to configure opsmgr + tiles via externalized git config?
In order to provide accurate telemetry data across multi-cloud configurations, a PCF operator should be able to specify the IaaS type (AWS, vSphere, Azure, etc.) in their telemetry data.
For example, you're able to specify the environment (qa, development, pre-prod, prod) but not the IaaS (AWS, Azure, GCP, etc.).
Perhaps I'm misunderstanding how telemetry data aggregation works (maybe this already happens automatically?), but I'm concerned that if I send my AWS development foundation telemetry and I send my Azure development foundation telemetry, you'll lose the fidelity of which IaaS the data belongs to.
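For example, each foundation's collection could be tagged with its IaaS alongside the existing environment type. The flag name --iaas-type is purely illustrative (it does not exist today), and the exact spelling of the environment flag is an assumption as well:

```shell
# Hypothetical: two development foundations on different IaaSes,
# each tagged so aggregation can keep them distinct
telemetry collect --env-type development --iaas-type aws ...
telemetry collect --env-type development --iaas-type azure ...
```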