
gallagher's Introduction

      ___           ___           ___           ___           ___           ___       ___     
     /\  \         /\__\         /\  \         /\__\         /\  \         /\__\     |\__\    
    /::\  \       /::|  |       /::\  \       /::|  |       /::\  \       /:/  /     |:|  |   
   /:/\:\  \     /:|:|  |      /:/\:\  \     /:|:|  |      /:/\:\  \     /:/  /      |:|  |   
  /::\~\:\  \   /:/|:|  |__   /:/  \:\  \   /:/|:|__|__   /::\~\:\  \   /:/  /       |:|__|__ 
 /:/\:\ \:\__\ /:/ |:| /\__\ /:/__/ \:\__\ /:/ |::::\__\ /:/\:\ \:\__\ /:/__/        /::::\__\
 \/__\:\/:/  / \/__|:|/:/  / \:\  \ /:/  / \/__/~~/:/  / \/__\:\/:/  / \:\  \       /:/~~/~   
      \::/  /      |:/:/  /   \:\  /:/  /        /:/  /       \::/  /   \:\  \     /:/  /     
      /:/  /       |::/  /     \:\/:/  /        /:/  /        /:/  /     \:\  \    \/__/      
     /:/  /        /:/  /       \::/  /        /:/  /        /:/  /       \:\__\              
     \/__/         \/__/         \/__/         \/__/         \/__/         \/__/              

gallagher's People

Contributors: dependabot[bot], devraj

gallagher's Issues

LLM integration to search for visits and other events

OpenAI has an official Python library, and it could lead to some interesting integrations if we index the synchronised database so that the user can pose questions like:

Dev Mukherjee's last visit to the pool

or event reporting such as:

How many times a month does Dev Mukherjee go to the pool?

Design

The first portion of this would be to analyse how the OpenAI platform works from an API standpoint and whether it's useful to present this as part of the cli and tui interfaces.

Note that the API key will have to be vendored in by the user of the tool, but this is ok as the interfaces are designed for an advanced user.

Once we are comfortable with what the OpenAI API does, then we can expand on the specific requirements of this ticket.

Technical considerations

Not that this should make a difference, but @openai uses rye as the package management tool for their Python package.

See ticket where we are moving to hatch

Raise `warnings` if an operation has an unintended use

Python provides the warnings package for issuing messages in situations where it is useful to alert the user of some condition in a program, where that condition (normally) doesn't warrant raising an exception and terminating the program.

For example, one might want to issue a warning when a program uses an obsolete module.
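A minimal sketch of how the library might use this, assuming a hypothetical UnintendedUseWarning category and a follow_next helper (neither exists in the codebase yet):

```python
import warnings


class UnintendedUseWarning(UserWarning):
    """Hypothetical warning category for operations used outside their intent."""


def follow_next(response):
    # Hypothetical guard: warn if a paginated follow is attempted on a
    # response that has no `next` link, rather than raising an exception.
    if getattr(response, "next", None) is None:
        warnings.warn(
            "follow_next called on a response without a next link",
            UnintendedUseWarning,
        )
        return None
    return response.next
```

Callers can still promote these to errors with `warnings.simplefilter("error")`, which keeps the choice with the application rather than the SDK.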

Setup automated publishing for pypi

PyPI has a GitHub Action to publish software releases to PyPI. At the moment we are already running tests using GitHub Actions against a cloud-hosted Command Centre (running in AWS).

The suggested release lifecycle should be

  • A developer has a pull request for the next set of feature changes
  • All tests run and pass
  • Documentation is compiled and published using the Github action
  • A version is tagged using semver conventions
  • Release notes are published as a Github release
  • Upon the release going out a job is triggered to publish the package onto pypi
  • Update local tags and documentation with the links (if applicable)
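The semver tagging step in the lifecycle above could be guarded in CI with a small check before the publish job fires; the pattern and function below are illustrative, not part of the project:

```python
import re

# Simplified SemVer pattern (major.minor.patch with an optional
# pre-release suffix, and an optional leading "v"), enough for a
# CI guard before triggering the publish job.
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-[0-9A-Za-z.-]+)?$")


def is_release_tag(tag: str) -> bool:
    """Return True if the tag looks like a publishable semver release."""
    return SEMVER.match(tag) is not None
```

A real workflow would also distinguish pre-releases (e.g. `-rc.1`) so they publish to TestPyPI instead of the production index.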

Provide a SQL (preferably a SQLAlchemy `dialect`) interface to query the REST API

As our ambition around the library grows, the sync feature #9 looks to be a lot more useful than I previously imagined. One of my thoughts was to treat this as if they were two database sources and use a layer like SQLAlchemy to keep them in sync.

Sync is a difficult problem at the best of times (see also SQL Sync for Schema with SQLAlchemy, python-sync-db). My initial research led me down the path of writing a SQLAlchemy dialect, primarily wondering if it's possible to wrap a REST service as a SQL source.

I found a number of articles and discussions which lead me to believe this is a possible way forward:

Digging through the list of external dialects I found betodealmeida / gsheets-db-api, which demonstrates that we can converse with a REST API via a SQLAlchemy dialect. This project links to Shillelagh, a library implementing the Python DB API 2.0 based on SQLite (using the APSW library).

Other resources:

  • Dolt version controlled MySQL database

The requirement is thus to research the above resources and outline the possibility of using a SQLAlchemy dialect to write the sync module, with the view of being able to use the REST client to communicate with the Gallagher proxy.

Why SQL?

One of the major questions around this is Why SQL?; we could head down the route of using an object database instead. The question to consider is whether the end user / customer would benefit from the data being available in an object database.

With a foundation of writing to a SQL backend we also gain the advantage of writing to / reading from corporate database backends.
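As a sketch of the shadow-database idea under the SQL route, using only the standard library's sqlite3 (the table name and fields are hypothetical, loosely based on the cardholder payloads elsewhere in this document):

```python
import sqlite3

# In-memory stand-in for the local shadow of the Command Centre data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cardholders (id TEXT PRIMARY KEY, href TEXT, name TEXT)"
)


def upsert(cardholder: dict) -> None:
    """Insert or refresh a cardholder row from a REST payload dict."""
    conn.execute(
        "INSERT INTO cardholders (id, href, name) VALUES (:id, :href, :name) "
        "ON CONFLICT(id) DO UPDATE SET href = :href, name = :name",
        cardholder,
    )


# Syncing the same resource twice simply refreshes the local row.
upsert({"id": "325", "href": "https://localhost:8904/api/cardholders/325",
        "name": "Algernon"})
upsert({"id": "325", "href": "https://localhost:8904/api/cardholders/325",
        "name": "Algernon Boothroyd"})
```

A SQLAlchemy dialect would sit in front of exactly this kind of store, letting consumers query the shadow without touching the REST API.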

Outline a callback pattern for implementing long poll methods

Various entities provide a long poll mechanism to get changes as they occur on the server (I assume this is due to the lack of webhooks, which would be difficult to proxy in the current environment, and for clients like what we are building long poll would make sense in certain use cases).

The endpoints generally seem to return a response if there are any updates, otherwise return a 400 and hang up.

This ticket is to study these endpoints (e.g. cardholder, alarms) and determine a pattern so all of them can follow the same design principles.
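One possible shape for such a callback pattern, sketched with asyncio and stand-in functions (fetch simulates the HTTP long poll; nothing here reflects the actual Gallagher endpoints):

```python
import asyncio


async def poll(fetch, callback, max_iterations=3):
    """Drive a long-poll loop, delivering each update to `callback`.

    `fetch` stands in for the HTTP call: it either returns a batch of
    updates or raises TimeoutError (modelling the 400/hang-up case).
    """
    for _ in range(max_iterations):
        try:
            updates = await fetch()
        except TimeoutError:
            continue  # no updates this round; re-issue the long poll
        for update in updates:
            callback(update)


# Stand-ins for a real endpoint and consumer.
async def fake_fetch():
    return ["alarm-1", "alarm-2"]


received = []
asyncio.run(poll(fake_fetch, received.append, max_iterations=1))
```

The same loop would work for cardholder changes, alarms, and items, with only `fetch` varying per endpoint, which is the uniformity this ticket is after.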

Investigate `logfire` integration at an `httpx` and `sqlalchemy` level

Is your feature request related to a problem? Please describe.
@samuelcolvin and team @pydantic have released logfire which is a log observability tool built on top of pydantic.

It supports usage with httpx and sqlalchemy, this ticket is to request a review of the functionality it provides and see if there's value in integrating it into the library.

Describe the solution you'd like
The idea came to me from use cases around using this library for commercial products, where we could benefit from some observability as the applications scale up.

Key points to consider:

  • Authentication against the logfire service
  • Make it an opt-in function because of performance and the fact that you need to be part of the service
  • Make it an optional dependency so the user does not have to vendor logfire in by default

Describe alternatives you've considered
NA

Additional context
Study the logfire implementation to learn about async use of logging messages with the service without affecting the performance of the framework.

Integrate standard python logging across the library and applications to ease debugging

Is your feature request related to a problem? Please describe.
I should have started on this earlier, but better late than never; this request is to integrate logging to ease debugging across the library and application.

While we have unit testing across the SDK which ensures proper operation, there are now use cases where we have programmatic and user interfaces, e.g. SQL and CLI/TUI.

While building an adapter for shillelagh we came across the need to enable logging, as the interactive shell hides all error messages:

import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')

Describe the solution you'd like
Where possible add logging.DEBUG messages so when the developer/user is running the library with verbose logging enabled they can see what is going on under the hood.

There are various lifecycle use cases where the order of operations matters; for example, the _discover method should only be called once the environment variable with the API key is available to the environment.

Another example: the __config__ for endpoints is automatically populated when discovery runs, and we need to validate that this has happened.

While this does not cause an error per se, it does cause the SDK to have unexpected behaviour, and debug messages will assist in getting to the bottom of this quicker.
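A sketch of the conventional per-module logger pattern, assuming a hypothetical gallagher.core module and a _discover-style lifecycle step (in the real module the logger name would be `__name__`):

```python
import logging

# Conventional per-module logger; applications opt in to verbosity via
# logging.basicConfig or their own handlers, the library never configures
# handlers itself.
logger = logging.getLogger("gallagher.core")


def discover():
    """Hypothetical discovery step, instrumented with debug messages."""
    logger.debug("running discovery against /api")
    # ... populate endpoint configuration here ...
    logger.debug("endpoint configuration populated")
```

With `logging.basicConfig(level=logging.DEBUG)` in an application (or the shillelagh shell snippet above), these messages surface the order of lifecycle operations without changing the SDK's behaviour.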

Describe alternatives you've considered
NA

Additional context
Please add this to the list of things that needs to be documented for developers of the SDK.

Consider moving to `asyncio` before the library gets too large

We're getting to a stage where the library design and patterns are maturing. Tests are running consistently and passing as expected.

It would thus be time to fully flesh out the entire library before we move into some enterprise features and use this in integrations and products.

Most of Anomaly's projects have moved to using asyncio and it would thus be unwise not to give this library asyncio support. We would have to consider:

Forward compatibility (HATEOAS) - Discover all endpoints dynamically from `api` response

According to the documentation we should not reference /api/items and other endpoints statically; these should instead be discovered from the features.items.items.href attribute of the /api response.

To find an item, pass a substring of its name to the link at features.items.items.href in the results of a call to /api. If you are sure of its name, place the name inside " quotes, and it will use a full string match. Both types of search are case-insensitive.

This ticket is to refactor the current implementation to add discovery.

As the Gallagher documentation states this is for HATEOAS compatibility:

This is a self-referencing REST API that follows the principles of HATEOAS. Other than the initial GET to /api when it first connects, your source code should not contain any URLs, as they are subject to change. You should append the query parameters this document describes for operations such as filtering and searching, but everything in the path should come from the results of /api or pages linked from it.

/api only shows licensed API calls.

Be prepared to append query parameters to URLs that already have their own: do not assume that you can simply add a question mark and your parameters.
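One safe way to honour that advice in Python is to parse and re-encode the query string rather than concatenating a question mark; this helper is illustrative, not part of the library:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse


def append_params(url: str, extra: dict) -> str:
    """Append query parameters to a URL that may already have its own.

    Existing parameters are preserved; new ones are merged in, so the
    caller never has to care whether the discovered href already carries
    a query string.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(extra)
    return urlunparse(parts._replace(query=urlencode(query)))
```

Because the hrefs come from discovery and are opaque to the client, every place the library adds filtering or search parameters should go through a helper like this.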

Separate interfaces and API client to be separate installables

Since the initial project was started I've expanded the scope to include a #13 and now #25 terminal based interface to interact with the Gallagher ecosystem.

While this is super cool, we should not pollute the install base with unnecessary items if all the user wants is the API client.

We propose to split the installables into three (or as many as required into the future):

  • gallagher, installs the API client only
  • gallagher[cli], installs the CLI which will require the API Client
  • gallagher[gui], installs the textual based interfaces
  • gallagher[all] installs everything we offer

Note this requires a refactor of the pyproject.toml file

Examples from @taskiq-python projects can be found here

Support retrieving partial responses from endpoints

Endpoints such as searches allow providing a parameter called fields, which is an enum of fields that the response ends up returning, e.g. href, id, name, type, division, serverDisplayName, notes

The string must not contain any spaces. Just alphanumerics, underscores, commas, and dots.

This would mean we would have to make all fields in our Response objects optional.

We should also validate that the input is in fact an acceptable response field name.
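A sketch of such validation, using the character rules quoted above and the example field names (in practice the accepted set would come from the response model's metadata rather than a hard-coded set):

```python
import re

# Per the docs the fields string allows only alphanumerics, underscores,
# commas, and dots -- no spaces. This matches one or more comma-separated
# tokens of those characters.
FIELDS_RE = re.compile(r"^[A-Za-z0-9_.]+(,[A-Za-z0-9_.]+)*$")

# Illustrative set; the real list would be derived from the Response model.
ACCEPTED = {"href", "id", "name", "type", "division",
            "serverDisplayName", "notes"}


def validate_fields(fields: str) -> bool:
    """Check both the string's shape and that each name is known."""
    if not FIELDS_RE.match(fields):
        return False
    return all(f in ACCEPTED for f in fields.split(","))
```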

Migrate to `pydantic` v2

Pydantic has had its first major release, which is a breaking change. We should upgrade to v2 and then change the following:

  • Config classes are now deprecated
  • Dsn types no longer inherit from str

Provide a `textual` based terminal user interface for the command centre

A completely unnecessary but cool request: following from #13 working so nicely, and @Textualize offering cool libraries like trogon and textual, I thought wouldn't it be nice to provide an additional terminal based interface which isn't exactly a CLI but not a traditional GUI either.

Moreover this will extend our Python skills on the CLI and allow us to add value to the library.

The why of this request really is around bringing richer CLI tools for the Gallagher ecosystem.

Note that this request and the CLI should be additional poetry installables. The API client should be a product of its own.

Design a mechanism to parse Personal Data Fields (PDF) from API responses for Cardholders using `pydantic`

Gallagher's REST APIs return configurable fields (per instance of the server) called Personal Data Fields (PDFs). These are additional fields that are relevant to the organisation and are not common. The keys of these fields are always prefixed with an @ symbol, e.g. @Student ID in this example from their docs:

{
  "href": "https://localhost:8904/api/cardholders/325",
  "id": "325",
  "firstName": "Algernon",
  "lastName": "Boothroyd",
  "shortName": "Q",
  "description": "Quartermaster",
  "authorised": true,
  "lastSuccessfulAccessTime": "2004-11-18T19:21:52Z",
  "lastSuccessfulAccessZone": {
    "href": "https://localhost:8904/api/access_zones/333",
    "name": "Twilight zone"
  },
  "serverDisplayName": "ruatoria.satellite.int",
  "division": {
    "href": "https://localhost:8904/api/divisions/2"
  },
  "@Student ID": "8904640"
}

We are using pydantic schemas to parse the API response. The aim would be to use event based methods to dynamically parse these and make them available as a dictionary:

cardholder.pdfs["StudentId"]

for the API user to access these fields in a Pythonic manner.

Note that the response has a field called personalDataDefinitions which contains references to each one of the PDF definitions.
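A minimal sketch of the extraction step as a plain dict comprehension; in the real implementation this would likely hang off a pydantic validator, and the key handling shown (stripping the leading @) is an assumption, not settled design:

```python
def extract_pdfs(payload: dict) -> dict:
    """Collect Personal Data Fields (keys prefixed with '@') into a dict."""
    return {key[1:]: value
            for key, value in payload.items()
            if key.startswith("@")}


# Abbreviated version of the cardholder payload from the docs above.
cardholder = {
    "id": "325",
    "firstName": "Algernon",
    "@Student ID": "8904640",
}

pdfs = extract_pdfs(cardholder)
```

The personalDataDefinitions references mentioned above could then be used to attach type information to each extracted value.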

Move to `hatch` as a python project tool

I've been tracking whether poetry is suitable for our projects (especially when it comes to managing libraries) - don't get me wrong, poetry does wonders.

This is based on the analysis of what some of our major library vendors are using, and where the ecosystem is going in terms of a PEP665 compliant tool.

I have decided to move our projects to hatch and this is a ticket to move this project (first amongst anomaly projects) to hatch.

Note that we should produce substantial documentation on how we moved from poetry to hatch.

See also:

Design pattern for `next`, `previous`, and `updates` endpoints

The library now uses async methods, hence we should be able to design a nice pattern that allows us to follow next, previous, and updates endpoints.

These are available as part of a response, hence we should be able to ask the response to follow the responses in either direction.

The real challenge here is that the URLs are dynamically generated so we have to discover the URLs once the original response is received.

Introduce `before` and `after` hooks in API endpoints

The API endpoints inherit from a Base handler, which lets each handler configure handler-specific context and otherwise handles the behaviour on its own.

In certain contexts, for example discovery of URLs, we need to run operations to prepare the context.

This is a proposal to introduce before_handler and after_handler hooks that run before and after the main handler.
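A sketch of the proposed hook pattern (the hook names come from this proposal; the base class shape and the DiscoveryHandler example are illustrative):

```python
class BaseHandler:
    # Hook points proposed above; subclasses override only what they need.
    def before_handler(self):
        pass

    def after_handler(self):
        pass

    def run(self):
        """Template method: wrap the main handler with the two hooks."""
        self.before_handler()
        result = self.handle()
        self.after_handler()
        return result

    def handle(self):
        raise NotImplementedError


class DiscoveryHandler(BaseHandler):
    """Illustrative handler that prepares context (e.g. URL discovery)."""

    def __init__(self):
        self.calls = []

    def before_handler(self):
        self.calls.append("before")  # e.g. discover URLs here

    def handle(self):
        self.calls.append("handle")
        return "ok"

    def after_handler(self):
        self.calls.append("after")
```

This is the classic template-method pattern, so existing handlers keep working unchanged unless they opt into a hook.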

Refactor (if required, and doesn't break anything) the use of `reserved` keywords as `attribute` names in `pydantic` models

Describe the bug
While this works, it's not best practice to use reserved words as attribute names in classes. Consider this class:

class AlarmSummary(
    AppBaseModel,
    HrefMixin,
    IdentityMixin,
):
    time: datetime
    message: str
    source: AlarmSourceSummary
    type: str

type is a builtin name in Python and is used as an attribute in the class; IDEs will flag this as problematic (see the screenshot in the original issue).

Pydantic has support for TypedDict and dataclasses for defining models; we should look into the best way to implement this without losing the developer focus and pythonic nature of the API classes.
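To illustrate the aliasing idea with a plain dataclass (pydantic would typically solve this with a field alias instead; the simplified AlarmSummary and from_api constructor here are illustrative stand-ins):

```python
from dataclasses import dataclass


@dataclass
class AlarmSummary:
    # `type_` avoids shadowing the builtin `type` on the class; the
    # trailing-underscore convention is from PEP 8.
    time: str
    message: str
    type_: str

    @classmethod
    def from_api(cls, payload: dict) -> "AlarmSummary":
        """Map the API's `type` key onto the non-shadowing attribute."""
        return cls(
            time=payload["time"],
            message=payload["message"],
            type_=payload["type"],
        )
```

With pydantic the same mapping would be declarative, keeping the wire format untouched while the Python attribute name stays clean.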

To Reproduce
NA

Expected behavior
Implement a proper way

Provide a runnable TUI demo interface via `textual-web` for users to try against the Anomaly license

For users to be able to try the TUI without having to install anything, it would be cool if we can provide a web accessible version. textual-web is designed to do this and is currently in beta.

Technical consideration

We should first run a demo to see what infrastructure we need to run a tui application (e.g. do we need to run a daemon or process of some sort?)

Ideally we should be able to run this on a service like vercel or railway to minimise any infrastructure management overhead.

From what I can see, textual-web runs a proxy that is accessible via their web service (pending further investigation):

~/.local/bin/textual-web --run "gallagher.tui:main"

The demo license is provided for Anomaly to use, so we would have to consider some form of rate limiting or even a way to limit user interaction. This might even involve a mechanism to reset the virtual machine running our command centre to a known good state every 24 hours.

Ensure that the API key is never exposed as part of the demonstration.

Other thoughts

This would definitely be a very good demonstration of the #9 and #27 features without the user having to set anything up.

Handle feature not licensed errors

The Gallagher API licenses features by the piece; if a feature isn't available the server responds with a 403 and the following json:

HTTP/1.1 403 Forbidden
Cache-Control: no-cache
Content-Length: 34
Content-Type: application/json; charset=utf-8
Date: Sat, 10 Jun 2023 05:08:12 GMT

{
    "message": "Feature not licensed"
}

The API client should have the ability to handle these responses and raise an exception.
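A sketch of what that handling might look like (FeatureNotLicensedError and check_response are hypothetical names, not part of the library):

```python
class FeatureNotLicensedError(Exception):
    """Raised when the Command Centre returns 403 with
    {"message": "Feature not licensed"}."""


def check_response(status_code: int, body: dict) -> None:
    """Translate the licensing 403 into a typed exception.

    Other 403s (e.g. bad API key) fall through so they can be handled
    separately by the caller.
    """
    if status_code == 403 and body.get("message") == "Feature not licensed":
        raise FeatureNotLicensedError(body["message"])
```

A typed exception lets applications distinguish "not licensed" from authentication failures and degrade gracefully, e.g. by hiding the unlicensed feature in the CLI/TUI.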

Provide a set of examples references from the documentation

Is your feature request related to a problem? Please describe.
It would be good to have runnable examples for the end user to be able to almost copy and paste and run on their console. Particularly when it comes to using the API client.

Describe the solution you'd like
This is inspired by the developer experience provided by companies like Stripe, who do an excellent job at providing accurate examples of runnable code.

While we reference examples in our documentation we should also maintain a folder of examples which the developers can vendor in.

Note that we have a strong set of tests that we can draw from and the cli demonstrates the use of most of the endpoints.

The aim of this is to extend these working examples and keep them in sync.

A point to consider is how we are going to keep these up to date as the project evolves.

Describe alternatives you've considered
NA

Additional context
See the shillelagh project for examples

Create a CLI to interact with Gallagher Command Centre

While building the API, I found myself constantly making http requests to test payloads. I have also been moving a lot of my workloads to the command line for services like stripe or github via their CLI.

I also have a set of httpie based payloads that I use to support some of our clients e.g:

echo -n '
{
  "accessGroups": {
    "add": [{
      "accessGroup": {
	    "href": "https://commandcentre-api-au.security.gallagher.cloud/api/access_groups/1052"
      },
      "from": "2023-04-11T10:30:00Z",
      "until": "2023-05-11T10:30:00Z"
    }]
  }
}' | http patch https://commandcentre-api-au.security.gallagher.cloud/api/cardholders/4149 "Authorization: GGL-API-KEY $GH_API_KEY"

The above adds an access group to a cardholder.

The proposal is to build upon the REST API client and provide a cli to interact with the command centre to perform various operations.

A sample of what the user would be able to do would look like:

gl cardholder search devraj

or

gl cardholder get 3222

Add social cards for mkdocs

Is your feature request related to a problem? Please describe.
Nope, just a quick enhancement to the mkdocs configuration.

Describe the solution you'd like
We already have assets for our social presence, we should add these to the mkdocs configuration:

Describe alternatives you've considered
NA

Additional context
This is very particular to mkdocs-material.

Thank you @squidfunk for mkdocs-material and @james-willett for the tutorial

Support idempotency for safely retrying requests without accidentally performing the same operation twice

All requests should be idempotent, to ensure that failed requests can be retried without accidentally repeating the same request.

This can be achieved by adding an idempotency_key to each request and the API requestor ensuring that two requests with the same key are never in flight at the same time.

If the user of the API client intentionally fires multiple requests of the same nature, each will be assigned a separate idempotency_key and the duplicate request will be sent.

The intent of this feature is to ensure that the API client is behaving as expected and duplicate requests are intentional on an application level.
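A sketch of the key generation and in-flight guard described above (names are illustrative; a real client would scope the in-flight set per transport session rather than a module-level global):

```python
import uuid

# Tracks keys of requests currently in flight.
_in_flight: set = set()


def new_idempotency_key() -> str:
    """Generate a fresh key; deliberate retries reuse the caller's key,
    deliberate duplicates get a new one."""
    return uuid.uuid4().hex


def begin_request(key: str) -> None:
    """Refuse to put two requests with the same key in flight at once."""
    if key in _in_flight:
        raise RuntimeError(f"request {key} is already in flight")
    _in_flight.add(key)


def end_request(key: str) -> None:
    """Release the key once the request completes or fails."""
    _in_flight.discard(key)
```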

See also #18

Provide a set of utility scripts that populates developer data via the REST API

Is your feature request related to a problem? Please describe.
For various use cases like:

  • Visits
  • Alarms

it would be terribly handy to have a set of sample data that would make the system behave like there has been activity.

We can populate this via the Command Centre interfaces but it would be handy to automate this.

After populating the database we could snapshot it and restore it to that state, which would save repopulating this data.

Describe the solution you'd like
Write and make available a set of scripts that build on top of the REST client to populate a logical set of data, ideally building on top of the data that the Command Centre comes with:

  • Query what is already part of the Command Centre
  • Use references from that to populate the events

Describe alternatives you've considered
NA

Additional context

  • Allow this to be run as a task
  • Make this a deliberate task that the user has to run

Make the list of fields in a response configurable

Is your feature request related to a problem? Please describe.
Many of the endpoints allow providing a list of fields to the request, which varies the size of the response payload (see alarms). We should support this as part of the API calls.

Describe the solution you'd like
At the moment the pydantic models expect all of the fields to exist; we will have to study how to temporarily relax the validation requirements of pydantic models. I would rather not have to duplicate the models for the two use cases.

Provide a pythonic manner to supply the list of fields (rather than strings); again we might have to lean into the pydantic models and how they make the metadata available to the application.

Describe alternatives you've considered
NA

Additional context
NA

Add `--json` option to cli tool to output structured data, also consider `--csv` and `--markdown`

Is your feature request related to a problem? Please describe.
While we have the CLI presenting human readable data to our users, it can be useful to get structured json data for other tools to work with.

Describe the solution you'd like
Add a --json option as described here to allow users to publish the output from the cli as json.

Question, should we support --csv as well?

Describe alternatives you've considered
NA

Additional context
clig.dev wisdom

Either memoize responses or use a caching library (like `hishel`) to boost performance

See also hishel, built on top of httpx, for caching responses. This might be more apt? https://github.com/karpetrosyan/hishel


While implementing a solution for #5 and #8, which both depend on discovery of the API endpoints and thus incur a performance hit from calling the API endpoints unnecessarily, I implemented a memoized property for the API endpoint discovery.

This led me to wonder whether it would be helpful for the REST client to be able to cache information. To implement this properly we would have to:

  • Check if the command centre is able to provide information, i.e. an ETag or the like, to denote whether the cached copy is still valid
  • Outline an appropriate caching technique, either in memory or writing to disk (the latter can be more complicated than we need)
  • Consider how this might affect something like #9
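The memoized-property approach already in place for discovery can be sketched with functools.cached_property (the class and payload here are illustrative, not the library's actual discovery code):

```python
from functools import cached_property


class APIEndpoint:
    """Illustrative endpoint wrapper with memoized discovery."""

    def __init__(self):
        self.fetch_count = 0  # instrumentation for the sketch

    @cached_property
    def config(self):
        # Stands in for the HTTP call to /api; runs once per instance,
        # after which the result is cached on the instance.
        self.fetch_count += 1
        return {"href": "https://localhost:8904/api/cardholders"}
```

The limitation this ticket points at is visible here: `cached_property` never invalidates, whereas an ETag-aware cache could revalidate against the Command Centre.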

Provide `brew` (or other installer) packages for installing the `cli` and `tui`

Is your feature request related to a problem? Please describe.
Since the expansion of the scope of the project to provide a cli and tui we should consider providing packages for users to conveniently install and use the applications without having to worry about the rest client package.

Describe the solution you'd like
Research which package managers we should support across operating systems (please update this list as ones are considered):

  • [brew] under macOS
  • Docker container (cross platform) published via ghcr

This could be one where we can enlist community contributors (given the variety of package managers across operating systems) to help with managing the packages.

Describe alternatives you've considered
NA

Additional context
NA

Reorganise DTO to `refs`, `summary`, `detail`, `response` packages to avoid circular dependency issues

Consider commit cdaa0b2 where I have had to disable imports of Refs from the alarm package in event; this is because the data cross references itself, that is to say Alarms are Events, and an Event Summary can be that of an alarm.

This will become a bigger problem as the library is built out, we will have several cases of cross references.

It's thus suggested that we refactor the code base and move all references to dto/refs.py package to avoid circular dependencies.

Investigate the use of `keyring` to store API Keys

Is your feature request related to a problem? Please describe.
Ideally the gallagher token is provided by another mechanism (e.g. an environment variable) that is set by the user. Note that the API Key is generated by the Command Centre and is then passed as headers to the command centre (or via the proxy).

Describe the solution you'd like
Consider the use of a package like keyring to store credentials from the Gallagher command centre.

More importantly we should debate if this is the right approach (the consideration here is that we have tui and cli applications which are public facing as opposed to developer centred).

Describe alternatives you've considered
NA

Additional context
Given the way keychains behave in Windows, we might have to provide substantial documentation for users to get this working properly.

Synchronisation of command centre data with a local data source (preferably a SQL backend)

Consider the Cardholder changes endpoint which essentially provides a list of events that occurred against a cardholder. These sorts of endpoints provide valuable information for third party integrations such as the ability to provide a history of when a customer has accessed a door.

Gallagher cloud does not provide the ability to add webhooks to receive events as they occur (which probably makes sense from a deployment and possibly security perspective).

Built on top of the base API client we should provide the ability to synchronise with the command centre; these events would have to be written to a backend to keep a shadow of what is available on the command centre. The idea of this is not to query the command centre when the application is rendering interfaces.

This ticket has to be expanded with the details, possibly requires an RFC.
