
Open Product Recovery

This is the monorepo for the reference implementation of the Open Product Recovery standards. Open Product Recovery is a collection of standards and implementations designed to facilitate sharing of excess products with charitable (and secondary market) organizations.

What is Open Product Recovery (OPR)?

Open Product Recovery is a specification for describing and communicating about excess food (and other donatable products). OPR servers can publish offers of donatable products to other OPR servers, and every OPR server hosts a public REST API to allow other organizations to accept those offers.

Is OPR useful for my organization?

Yes! If:

  1. Your organization generates enough donatable products that you need a way to automatically tell charitable organizations that you have donations for them.
  2. Or, your charitable organization receives enough donatable products that you need a way to automatically discover donations.

Ideally, your organization already has an inventory system that tracks the donations you work with. OPR has a flexible plugin system that allows it to read and write offers to and from just about any storage system with an API.
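
To make the plugin idea concrete, here is a loose TypeScript sketch of what an offer-producing plugin looks like conceptually. This is illustration only: the real OfferProducer interface lives in opr-core and has a different signature, and every name below except produceOffers is a placeholder.

    // Loose illustration only, not the real opr-core types: an offer producer
    // conceptually wraps an existing inventory API and maps its records into
    // OPR offers. All names below except produceOffers are placeholders.
    interface InventoryItem {
      sku: string;
      description: string;
      quantity: number;
    }

    class InventoryOfferProducer {
      constructor(
        private readonly fetchInventory: () => Promise<InventoryItem[]>
      ) {}

      // Each producer returns the full current set of offers for its organization.
      async produceOffers(): Promise<Array<Record<string, unknown>>> {
        const items = await this.fetchInventory();
        return items.map(item => ({
          description: item.description,
          quantity: item.quantity,
          internalId: item.sku, // mapping back to the internal inventory record
        }));
      }
    }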

What is in this repository?

This repository contains a number of Node.js projects that work together to implement the OPR specification. Please read the README file in each of the subdirectories to find out more about the libraries in each component.

How do I get started?

Join the Discord

Join the discussion at http://chat.opr.dev/ for support.

Install deps and build

We recommend Node versions >=18.x due to compatibility issues with Lerna.

First, run the setup script to install packages and compile the workspaces. Note that there are a few known issues with the install process that the setup script is designed to overcome. We highly recommend you use this script for package installation and setup.

sh ./setup.sh

Run the unit tests

Run

npx lerna run test

from the root directory to run all the unit tests. If you want to run the Postgres tests in opr-sql-database, you need to have a working installation of Postgres and the initdb, postgres and psql commands must be in your PATH environment variable. If those commands aren't available, the Postgres tests will be skipped.

Devcontainer

If you use VS Code, install the Dev Containers extension as well as Docker Desktop. VS Code will helpfully ask if you'd like to re-open the project in a devcontainer, which should, after downloading and setting up its dependencies, drop you into a prompt that is ready to run npx lerna run test.

Read the standards

This library is the reference implementation for the Open Product Recovery standards. You may want to read the standards docs before you dive into setting up your own server.

Start configuring your own OPR server

Check out the examples folder for working setups in various environments. A good place to start is the local-starter folder. The local-starter example uses a SQLite database and includes some special endpoints that generate fake offers, making it easier to see common server settings and how they work.

Once you're fluent with the examples, learn more than you ever wanted to know about OPR extensibility in our Integrations Guide.


open-product-recovery's Issues

Reorganize root to "Components" and "Examples"

Early on, we began separating everything into separate "opr-" folders based on the thinking that we might split these things out as separate repos at some point. Now, with our "monorepo" approach, it probably makes more sense to simplify the root structure. We discussed adding a "components" folder in src that would house subfolders for each independent npm package, and an "examples" folder that would house example implementations.

  • Add "components" folder to root
  • Add "components/platforms/" folder
  • Add "examples" folder to root
  • Move opr-models, opr-core, opr-sql-database, opr-dev-tools to components/
  • Move opr-google-cloud to components/platforms/
  • Move opr-example-server and opr-example-server-gcp to examples/
  • Rename opr-example-server folder to "local-starter"
  • Rename opr-example-server-gcp to "gcp-cloudrun-postgres"

Break up the testing workflow into multiple independent steps

Change the lerna.yml workflow so that testing steps can be run in parallel. At present, it usually takes between 10 and 20 minutes to run all the preflight checks.

Right now the process is:

  1. Bootstrap the project without hoisting
  2. Clean the project
  3. Bootstrap the project again with hoisting
  4. Run the unit tests
  5. Run test-docker

All of these steps are run for all three versions of Node.

This runs a number of steps in series that could be attacked in parallel, either by clever use of the matrix feature or (perhaps more clearly) by breaking them into independent workflows.

ACL listing in examples/gcp-cloudrun-postgres is broken

Expected Behavior

StaticServerAccessControlList accepts '*' characters, as listed in the cloudrun postgres example

Actual Behavior

'*' is not valid for StaticServerAccessControlList, and all requests are rejected

Steps to Reproduce the Problem

  1. Just run examples/gcp-cloudrun-postgres and watch all your requests fail

Specifications

  • Version: all
  • Platform: all
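
Until wildcard handling is fixed or removed from the example, one possible workaround is to list the allowed organizations explicitly. This is a sketch under assumptions: that StaticServerAccessControlList is exported from opr-core and that its constructor accepts a list of allowed organization URLs; verify against the actual source before relying on it.

    // Workaround sketch only; the export location and constructor arguments
    // of StaticServerAccessControlList are assumptions to verify in opr-core.
    import {StaticServerAccessControlList} from 'opr-core';

    const accessControlList = new StaticServerAccessControlList([
      'https://partner.example.org/org.json', // hypothetical partner org URL
      'https://foodbank.example.com/org.json', // hypothetical partner org URL
    ]);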

Instructions for deploying the container in the GCP example fails on M1 Mac

Expected Behavior

The README for the gcp-cloudrun-postgres example should work on an M1 Mac

Actual Behavior

When deploying the docker image on GCP, it will fail to connect.

Steps to Reproduce the Problem

  1. Build the docker container as per the README
  2. Push the docker container as per the README
  3. Deploy the container

Specifications

  • Version:
  • Platform: MacBook Pro with Apple M1 Pro chip

To fix this, the docs need to be updated to include a platform argument for linux, i.e.:
docker build --no-cache --platform linux/amd64 -t gcr.io/appert-test-0/example-server .

Update copyright notices in all docs

Current notice:

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

New notice:
TBD

Clarify/simplify "OprClient" setup in OprServer

An OprClient object facilitates requests to a remote OPR server. When setting up an OprServer object, there is a "client" option that takes boolean | OprClient | UrlMapper. This lets a user say "build a default client for me" or "don't use a client at all" (true or false); supply a fully constructed client of their own (OprClient); or pass in a UrlMapper, which replaces just that part of the otherwise-default OprClient the server creates.

This implementation is a little confusing and it would be nice to clarify and simplify this part of an OprServer instantiation.
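
For reference, the option described above boils down to a union like the following. This is a sketch of the behavior as described, not the actual type declaration in opr-core, and the import location is an assumption.

    // Sketch of the "client" option as described above, not the real declaration.
    import {OprClient, UrlMapper} from 'opr-core'; // assumed export location

    // true      -> the server builds a default OprClient
    // false     -> the server uses no client at all
    // OprClient -> the caller supplies a fully constructed client
    // UrlMapper -> the server builds a default client with this UrlMapper swapped in
    type ClientOption = boolean | OprClient | UrlMapper;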

Create an easier-to-use client for integrations

Currently, integrators need to use several different classes to build their integrations, and those classes require the developer to understand which operations are local and which are remote. We should add a new client API that is passed to integration code to handle common operations simply.

Change API to allow one OfferProducer per organization corpus

Expected Behavior

The API should be structured so that there can be at most one offer producer per organization, since each offer producer generates the full set of offers for an organization.

Actual Behavior

The API allows any number of offer producers to update any number of corpuses.

Steps to Reproduce the Problem

  1. Install multiple offer producers that produce offers for the same organization
  2. Note that offer producers overwrite each other's collections for any given org.

Specifications

  • Version: all
  • Platform: all

Schema compiler sometimes outputs the same type twice, leading to compilation errors.

Expected Behavior

types.ts contains each type definition exactly once.

Actual Behavior

The same output type may occur multiple times.

Steps to Reproduce the Problem

Add a file named jsonpatchop.schema.json with contents:

{
  "$id": "jsonpatchop.schema.json",
  "$schema": "http://json-schema.org/draft-04/schema#",
  "comment": "Copied and reformatted from source at https://json.schemastore.org/json-patch.json",
  "description": "A JSON Patch operation",
  "examples": [
    {
      "op": "replace",
      "path": "",
      "value": {}
    }
  ],
  "title": "JSONPatchOp",
  "oneOf": [
    {
      "additionalProperties": false,
      "required": ["value", "op", "path"],
      "properties": {
        "path": {
          "$ref": "jsonpath.schema.json"
        },
        "op": {
          "description": "The operation to perform.",
          "type": "string",
          "enum": ["add", "replace", "test"]
        },
        "value": {
          "description": "The value to add, replace or test."
        }
      }
    },
    {
      "additionalProperties": false,
      "required": ["op", "path"],
      "properties": {
        "path": {
          "$ref": "jsonpath.schema.json"
        },
        "op": {
          "description": "The operation to perform.",
          "type": "string",
          "enum": ["remove"]
        }
      }
    },
    {
      "additionalProperties": false,
      "required": ["from", "op", "path"],
      "properties": {
        "path": {
          "$ref": "jsonpath.schema.json"
        },
        "op": {
          "description": "The operation to perform.",
          "type": "string",
          "enum": ["move", "copy"]
        },
        "from": {
          "$ref": "jsonpath.schema.json",
          "description": "A JSON Pointer path pointing to the location to move/copy from."
        }
      }
    }
  ]
}

Replace the definition of jsonpatch.schema.json with:

{
  "$id": "jsonpatch.schema.json",
  "$schema": "http://json-schema.org/draft-04/schema#",
  "comment": "Copied and reformatted from source at https://json.schemastore.org/json-patch.json",
  "description": "A JSON Patch array",
  "examples": [
    [
      {
        "op": "replace",
        "path": "",
        "value": {}
      }
    ]
  ],
  "items": {
    "$ref" : "jsonpatchop.schema.json"
  },
  "title": "JSONPatch",
  "type": "array"
}

Run `npm run compile` in opr-models.

Specifications

  • Version: any
  • Platform: any

Break out local vs. remote event types

Today, we're a little inconsistent about event types. ADD, UPDATE, and DELETE fire when those happen on either the remote server or the local server. We also have "ACCEPT" for when an offer from the local node has been accepted, and "REMOTE_ACCEPT" for when the local node itself has accepted an offer from somewhere else.

These should probably be tweaked a bit so that we separate out local vs. remote events and are consistent with them all. "LOCAL" could prefix things involving an offer from the current/local host, whereas "REMOTE" could prefix events that happen with offers from other hosts.

Perhaps a set like this (an enum sketch follows the list):

LOCAL_ADD: Local host/tenant creates a new offer (like through one of its offer producers)
REMOTE_ADD: A new offer is discovered on a remote node (like by calling /list during ingestion)
LOCAL_UPDATE: Local host/tenant updates an existing offer
REMOTE_UPDATE: A remote offer has been updated
LOCAL_DELETE: Local host/tenant deletes one of its offers
REMOTE_DELETE: A remote offer has been deleted
LOCAL_ACCEPT: An offer on the local host/tenant has been accepted by a different node/tenant.
REMOTE_ACCEPT: The local host/tenant has successfully accepted an offer from a different host.
LOCAL_REJECT: An offer on the local host/tenant has been rejected by a different node/tenant.
REMOTE_REJECT: The local host/tenant has successfully rejected an offer from a different host.
LOCAL_RESERVE: An offer on the local host/tenant has been reserved by a different node/tenant.
REMOTE_RESERVE: The local host/tenant has successfully reserved an offer from a different host.
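
A sketch of how the proposed set could be expressed as a string enum. This is illustrative only; the enum name is a placeholder and the values come from the list above.

    // Illustrative sketch of the proposed event names from the list above;
    // the enum name is a placeholder, not an existing type in opr-core.
    export enum ProposedOfferEventType {
      LOCAL_ADD = 'LOCAL_ADD',
      REMOTE_ADD = 'REMOTE_ADD',
      LOCAL_UPDATE = 'LOCAL_UPDATE',
      REMOTE_UPDATE = 'REMOTE_UPDATE',
      LOCAL_DELETE = 'LOCAL_DELETE',
      REMOTE_DELETE = 'REMOTE_DELETE',
      LOCAL_ACCEPT = 'LOCAL_ACCEPT',
      REMOTE_ACCEPT = 'REMOTE_ACCEPT',
      LOCAL_REJECT = 'LOCAL_REJECT',
      REMOTE_REJECT = 'REMOTE_REJECT',
      LOCAL_RESERVE = 'LOCAL_RESERVE',
      REMOTE_RESERVE = 'REMOTE_RESERVE',
    }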

Create a Google Cloud KMS-based signer

Google Cloud projects should use Cloud KMS for digital signatures so that users do not have to generate or store private keys. Cloud KMS keeps them entirely secret, which should be the best practice for our system.

The relevant methods are documented here:

https://cloud.google.com/kms/docs/create-validate-signatures
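
A minimal sketch of the signing call, assuming an asymmetric signing key version already exists in Cloud KMS. The key resource name is a placeholder, and wiring this into OPR's signer plumbing (and handling JWT-specific signature encoding caveats) is not shown.

    // Minimal sketch: sign arbitrary bytes (e.g. a JWT signing input) with a
    // Cloud KMS asymmetric key. The key resource name is a placeholder.
    import {KeyManagementServiceClient} from '@google-cloud/kms';
    import {createHash} from 'crypto';

    const KEY_VERSION_NAME =
      'projects/my-project/locations/global/keyRings/opr/cryptoKeys/opr-signing/cryptoKeyVersions/1';

    export async function signWithKms(signingInput: string): Promise<Buffer> {
      const client = new KeyManagementServiceClient();
      // Cloud KMS signs a digest of the payload, not the raw payload itself.
      const digest = createHash('sha256').update(signingInput).digest();
      const [response] = await client.asymmetricSign({
        name: KEY_VERSION_NAME,
        digest: {sha256: digest},
      });
      return Buffer.from(response.signature as Uint8Array);
    }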

And important caveats for JWTs are documented here:

Add Azure Reference Implementation/Example

It might be helpful to have a vetted implementation that lets users quickly start deploying an OPR server to Azure, similar to the existing Google Cloud implementation.

Useful components appear to include:

  • Minor Adjustments to Improve Support for Azure SQL
  • Azure-Specific JwksProvider and Signer Implementations (Using KeyVault)
  • Azure Container App Example Server
  • Bicep Templates for Resource Creation
  • Related Tests
  • Associated Documentation & Cost Estimate

Add support for push updates

Add a /push method to the specification that allows a SNAPSHOT or DIFF to be pushed to update the contents of an organization's offer corpus. This feature would require:

  • Ability to accept SNAPSHOT or DIFF updates
  • Ability for one organization to push on behalf of another to support multi-tenant cases?
  • A new ACL for push updates (possibly allowing one org to impersonate another)
  • Documentation updates

Add Packaging Types to Data Standard

It would be useful to represent additional packaging types in the data standard. Specifically, the additional hypothesized types include the following (a type sketch follows the list):

  • Case - Products are in a case
  • Drum - Products are in a drum
  • Pail - Products are in a pail
  • Pallet - Products are in a pallet
  • Tote - Products are in a tote
  • Tub - Products are in a tub
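
As a sketch, the additions could be represented as a string union like the one below; this is illustrative only, and the data standard's actual representation may differ.

    // Illustrative only: the hypothesized additional packaging types from the
    // list above, expressed as one possible TypeScript representation.
    export type ProposedPackagingType =
      | 'case'
      | 'drum'
      | 'pail'
      | 'pallet'
      | 'tote'
      | 'tub';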

Resolve docker build issue with opr-example-server

Expected Behavior

Running docker build -f Dockerfile.dev -t example-opr . from the latest related README results in a successful docker image build.

Actual Behavior

The docker build fails:

...
[7/7] RUN npm build:
#11 0.304 Unknown command: "build"
#11 0.304
#11 0.304 Did you mean this?
#11 0.304 npm run build # run the "build" package script
...

Steps to Reproduce the Problem

  1. Change to the opr-example-server directory (cd opr-example-server)
  2. Run the docker image build (docker build -f Dockerfile.dev -t example-opr .)

Specifications

  • Repository Version: ad38374
  • Platform: macOS Monterey on MacBook Air (m2)
  • Docker Source Image Version: Latest (Implicit), a7434a1bf94a
  • Node Version (Inside Docker Container): v19.0.0 (docker run -it --rm node /bin/bash -c 'node --version')
  • Docker (Client) Version Details:

Client:
Cloud integration: v1.0.29
Version: 20.10.20
API version: 1.41
Go version: go1.18.7
Git commit: 9fdeb9c
Built: Tue Oct 18 18:20:35 2022
OS/Arch: darwin/arm64
Context: desktop-linux
Experimental: true

Server: Docker Desktop 4.13.0 (89412)
Engine:
Version: 20.10.20
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 03df974
Built: Tue Oct 18 18:18:16 2022
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.8
GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Build a Firestore-based implementation of OprDatabase

Depends on #23

Create a Firestore-based implementation of OprDatabase. Firestore is cheaper to run and simpler to configure than Cloud SQL, and might be a better alternative to a SQL database for many Google Cloud customers.

Document Interfaces

Add docs for what they are, what each function does, and the parameters/return values.

Add frontendConfig data to IntegrationApi

Within an integration, it can be helpful to know basic information about the current server's configuration, so the data from the server's frontendConfig should be exposed through the IntegrationApi.

Pass a handle to the integrationApi when calling OfferProducer.produceOffers

Background: Sometimes OfferProducers need to store additional data about an offer that is outside of the OPR offer spec, especially mapping data between the internal representation of an offer in an inventory system and the public representation of the offer in OPR. This could be done very easily using the IntegrationApi.storeValue() (and related) methods, but these are not available to offer producers.

We should pass a handle to the integration API to OfferProducer.produceOffers so that offer producers can take advantage of the key/value store if they need it.
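
A sketch of the proposed change is below. Only produceOffers and storeValue are named in this issue; the parameter shapes and interface names are assumptions for illustration.

    // Sketch of the proposed signature change; the interface names and
    // parameter shapes are assumptions, only produceOffers and storeValue
    // come from the issue text.
    interface IntegrationApiLike {
      storeValue(key: string, value: unknown): Promise<void>;
    }

    interface OfferProducerLike {
      // Today produceOffers has no access to the integration API; the proposal
      // is to pass a handle so producers can use the key/value store.
      produceOffers(integrationApi: IntegrationApiLike): Promise<unknown>;
    }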

Add a Versioned OPR User-Agent

For various reasons, it might be wise to have OPR use its own versioned user agent. This could be used to identify the request as coming from an OPR server and, perhaps, also communicate the code version number running on the server.

As an example:

User-Agent: OpenProductRecovery/0.6.2
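
A sketch of how an outgoing request might set the header; in practice the version string would be read from package metadata rather than hardcoded.

    // Sketch: attach a versioned OPR user agent to outgoing requests.
    // In practice the version would come from package.json, not a constant.
    const OPR_USER_AGENT = 'OpenProductRecovery/0.6.2';

    async function fetchWithOprUserAgent(url: string): Promise<Response> {
      return fetch(url, {
        headers: {'User-Agent': OPR_USER_AGENT},
      });
    }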

Create Initial Admin Portal

In order to reduce friction for OPR use (and improve adoption), it would be very helpful to have an admin portal for an OPR instance. In an ideal world, an admin portal would include, at the very least, tenant node configuration. Additional helpful features:

  1. Integration Management
  2. Server Configuration

After enrichment, we'll break this issue down into a set of mutually exclusive and collectively exhaustive smaller subtasks.

Improve transaction/locking behavior on offer model [processUpdate] [mssql]

Expected Behavior

Processing a new/valid offer set update against a SQL Server (mssql) database results in records successfully inserted into the database.

Actual Behavior

The offer set fails to save and results in a database timeout in insertOrUpdateOfferInCorpus on t.em.save(corpusOffer) inside SqlOprPersistentStorage.

Steps to Reproduce the Problem

  1. Configure a SqlOprPersistentStorage storage provider to use SQL Server (mssql).
  2. Attempt to process a brand new offer through processUpdate in PersistentOfferModel. For example, by using the ingest method on the provided example server.
  3. Observe the resulting timeout.

Possible Related Cause & Solution

This appears to be related to the creation of two transactions that are used in processUpdate. The first transaction is created at the top of processUpdate and the other is created about a hundred lines lower. Removing the creation of the second transaction (and consolidating both into a single transaction) appears to resolve the issue.

Specifications

  • Code Version: 163f226
  • SQL Server Version: Microsoft SQL Azure (RTM) - 12.0.2000.8 Oct 18 2022 13:24:45
  • Platform: macOS Monterey on MacBook Air (m2)

OprDatabase needs to be refactored

My first pass at OprDatabase got the primitives all wrong. It's time to go back and fix it. Implementations currently contain too much logic that should instead be shared across all databases.

The current OprDatabase interface should become a class called OprDatamodel. The methods on that object should be exactly the methods defined on OprDatabase today. The OprDatamodel class should take an OprDatabase as a parameter, and that object should have very different primitives.

Those primitives are (see the interface sketch after this list):

  • createTransaction
  • commitTransaction
  • failTransaction
  • lockProducer
  • unlockProducer
  • initialize
  • rebuild
  • getOffersAtTime(transaction, orgUrl, time) => Promise<Array>
  • getOfferAtTime(transaction, orgUrl, offerId, offerPostingOrg, time) => Promise
  • getCorpus(transaction, orgUrl) => Promise<Array>
  • writeCorpus(transaction, orgUrl, Array) => Promise
  • getAllOffers(transaction) => Promise<Array>
  • getRejections(transaction, offerId, postingOrgUrl) => Promise<Array>
  • updateListings(transaction, offerId, postingOrgUrl, listings) => Promise
  • getListings(transaction, offerId, postingOrgUrl) => Promise
  • writeAccept(transaction, offerId, acceptingOrgUrl, decodedReshareChain) => Promise
  • writeReject(transaction, rejectingOrgUrl, offerId, postingOrg) => Promise
  • writeReservation(transaction, reservingOrgUrl, offerId, reservationLengthMillis) => Promise
  • getLastVisibleTimestamp(t, orgUrl, offerId, postingOrg) => Promise /** Returns the time when the offer disappears for this org */
  • getHistory(transaction, orgUrl, sinceTimestampUTC) => Promise<Array>
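
Sketched as a TypeScript interface: the method names and listed parameters come from the list above, while the concrete types (and the parameters of methods whose signatures are not given) are loose placeholders.

    // Sketch of the proposed primitives. Method names and listed parameters
    // come from the list above; the types are loose placeholders.
    interface ProposedOprDatabase<Tx, Offer, Listing> {
      createTransaction(): Promise<Tx>;
      commitTransaction(t: Tx): Promise<void>;
      failTransaction(t: Tx): Promise<void>;
      lockProducer(): Promise<void>; // parameters not specified in the list
      unlockProducer(): Promise<void>; // parameters not specified in the list
      initialize(): Promise<void>;
      rebuild(): Promise<void>;
      getOffersAtTime(t: Tx, orgUrl: string, time: number): Promise<Array<Offer>>;
      getOfferAtTime(
        t: Tx,
        orgUrl: string,
        offerId: string,
        offerPostingOrg: string,
        time: number
      ): Promise<Offer>;
      getCorpus(t: Tx, orgUrl: string): Promise<Array<Offer>>;
      writeCorpus(t: Tx, orgUrl: string, offers: Array<Offer>): Promise<void>;
      getAllOffers(t: Tx): Promise<Array<Offer>>;
      getRejections(t: Tx, offerId: string, postingOrgUrl: string): Promise<Array<unknown>>;
      updateListings(
        t: Tx,
        offerId: string,
        postingOrgUrl: string,
        listings: Array<Listing>
      ): Promise<void>;
      getListings(t: Tx, offerId: string, postingOrgUrl: string): Promise<Array<Listing>>;
      writeAccept(
        t: Tx,
        offerId: string,
        acceptingOrgUrl: string,
        decodedReshareChain: unknown
      ): Promise<void>;
      writeReject(t: Tx, rejectingOrgUrl: string, offerId: string, postingOrg: string): Promise<void>;
      writeReservation(
        t: Tx,
        reservingOrgUrl: string,
        offerId: string,
        reservationLengthMillis: number
      ): Promise<void>;
      /** Returns the time when the offer disappears for this org. */
      getLastVisibleTimestamp(t: Tx, orgUrl: string, offerId: string, postingOrg: string): Promise<number>;
      getHistory(t: Tx, orgUrl: string, sinceTimestampUTC: number): Promise<Array<unknown>>;
    }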

Add a top-level project structure test to reject new package-lock files

Because checkins to this repo require dependency hoisting, the introduction of a new package-lock file is always evidence of a problem. A new package-lock file means that two packages in this monorepo require different versions of the same dependency.

We should introduce a top-level project-structure test that enforces invariants on the repo like this one, and fails if a subproject is out of compliance.
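
A sketch of such a test using the mocha/chai/glob stack that the repo already uses elsewhere; the file location, glob pattern, and cwd handling are suggestions rather than existing code.

    // Sketch of a top-level project-structure test. The glob pattern, ignore
    // list, and assumption that the test file sits one level below the repo
    // root are all suggestions, not existing code.
    import {expect} from 'chai';
    import * as glob from 'glob';
    import * as path from 'path';

    describe('project structure', () => {
      it('contains no package-lock.json outside the repo root', () => {
        // Any package-lock below the root is evidence of un-hoisted,
        // conflicting dependency versions between subprojects.
        const locks = glob.sync('*/**/package-lock.json', {
          cwd: path.join(__dirname, '..'),
          ignore: ['**/node_modules/**'],
        });
        expect(locks, `unexpected package-lock files: ${locks.join(', ')}`).to.be
          .empty;
      });
    });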

Resolve Docker Automatic Build Issue

Expected Behavior

Automated docker build succeeds.

Actual Behavior

Automated docker build fails with an error because of a change to lerna in version 7.

Discuss/Setup NPM Release Process

It would be helpful to have an established NPM release process. Initial open questions for discussion:

  • Should we automate the NPM package release process? If so, how should it be triggered?
  • Should we have a test / prerelease NPM package version?
  • Should we adopt a versioning strategy (like https://semver.org/)?

Create "synchronize" as a common, installable, endpoint

The example servers all add a "/synchronize" endpoint (which is optionally protected). It looks like this today:

    const synchronizeEndpoint = {
      method: ['POST', 'GET'],
      handle: async () => {
        await storageDriver.synchronize(true);
        return 'ok - db initialized';
      },
    } as CustomRequestHandler;

Given that this is commonly wanted, we should move it to the opr-core library and make it easily installable in any server.

This should be created under opr-core/src/server/installable-endpoints/synchronize.ts. Then, export the endpoint as a CustomRequestHandler.
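
One possible shape for the installable module, sketched below; the relative import path and the storage driver's type are assumptions.

    // Sketch of opr-core/src/server/installable-endpoints/synchronize.ts.
    // The import path and the storage-driver parameter type are assumptions.
    import {CustomRequestHandler} from '../customrequesthandler';

    export function synchronizeEndpoint(storageDriver: {
      synchronize(force: boolean): Promise<unknown>;
    }): CustomRequestHandler {
      return {
        method: ['POST', 'GET'],
        handle: async () => {
          await storageDriver.synchronize(true);
          return 'ok - db initialized';
        },
      };
    }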

lerna test issues

Expected Behavior

Tests should pass.

Actual Behavior

Errors:

       src/json/resolver.ts(17,23): error TS2307: Cannot find module 'path' or its corresponding type declarations.
       src/json/resolver.ts(18,23): error TS2307: Cannot find module 'fast-json-patch' or its corresponding type declarations.
       src/json/resolver.ts(19,25): error TS2307: Cannot find module 'fast-json-patch' or its corresponding type declarations.
       src/json/sourcedjson.ts(17,19): error TS2307: Cannot find module 'json-to-ast' or its corresponding type declarations.
       src/json/sourcedjson.ts(18,16): error TS2307: Cannot find module 'fs' or its corresponding type declarations.
       src/json/sourcedjson.ts(132,31): error TS2339: Property 'start' does not exist on type 'Location'.
       src/json/sourcedjson.ts(132,49): error TS2339: Property 'start' does not exist on type 'Location'.
       src/json/sourcedjson.ts(852,3): error TS2322: Type 'SourcedJsonObject | SourcedJsonArray | SourcedJsonPrimitive<any> | undefined' is not assignable to type 'SourcedJsonValue'.
         Type 'undefined' is not assignable to type 'SourcedJsonValue'.
       src/json/sourcedjson.ts(1220,5): error TS2322: Type '{ isGuess: boolean | undefined; source: string; start: { line: number; column: number; offset: number; }; end: { line: number; column: number; offset: number; }; }' is not assignable to type 'Location'.
         Object literal may only specify known properties, and 'start' does not exist in type 'Location'.
       src/test/datadriventest.ts(17,22): error TS2307: Cannot find module 'chai' or its corresponding type declarations.
       src/test/datadriventest.ts(18,23): error TS7016: Could not find a declaration file for module 'glob'. '/Users/lindahl/Documents/GitHub/open-product-recovery/node_modules/glob/glob.js' implicitly has an 'any' type.
         Try `npm i --save-dev @types/glob` if it exists or add a new declaration (.d.ts) file containing `declare module 'glob';`
       src/test/datadriventest.ts(19,23): error TS2307: Cannot find module 'path' or its corresponding type declarations.
       src/test/datadriventest.ts(99,5): error TS2582: Cannot find name 'describe'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha`.
       src/test/datadriventest.ts(102,9): error TS2304: Cannot find name 'before'.
       src/test/datadriventest.ts(105,9): error TS2304: Cannot find name 'after'.
       src/test/datadriventest.ts(114,11): error TS2582: Cannot find name 'describe'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha`.
       src/test/datadriventest.ts(128,13): error TS2582: Cannot find name 'it'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha`.
       src/test/datadriventest.ts(206,7): error TS2582: Cannot find name 'describe'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha`.
       src/test/datadriventest.ts(230,5): error TS2304: Cannot find name 'before'.
       src/test/datadriventest.ts(247,5): error TS2304: Cannot find name 'beforeEach'.
       src/test/datadriventest.ts(250,5): error TS2582: Cannot find name 'it'. Do you need to install type definitions for a test runner? Try `npm i --save-dev @types/jest` or `npm i --save-dev @types/mocha`.
       src/test/datadriventest.ts(340,5): error TS2304: Cannot find name 'afterEach'.

Steps to Reproduce the Problem

  1. npx lerna run test

Improve example server container dependency loading

Currently, the example server docker image builds using the last published packages in the public package registry [1].

Under most conditions, this approach works perfectly fine. However, this arrangement inconveniences contributors attempting to use the docker image against a modified local source. Local (not yet published in the package registry) OPR package changes are not reflected in the docker image, but the latest source for the server is included.

In addition, breaking changes to the local package source (from the perspective of the example server) will cause the docker test script to fail locally and in GitHub.

Footnotes

  1. See lines 3-6 on https://github.com/google/open-product-recovery/blob/35cf411f8b939e71b7ec00cca37488bf96445c45/opr-example-server/Dockerfile.dev.

Create "ingest" as a common, installable, endpoint

The example servers all add an "/ingest" endpoint (which is optionally protected). It looks like this today:

    const ingestEndpoint = {
      method: ['POST'],
      handle: async () => {
        const changes = [] as Array<OfferChange>;
        const changeHandler = api.registerChangeHandler(async change => {
          changes.push(change);
        });
        await s.ingest();
        changeHandler.remove();
        return changes;
      },
    } as CustomRequestHandler;

Given that this is commonly wanted, we should move it to the opr-core library and make it easily installable in any server.

This should be created under opr-core/src/server/installable-endpoints/ingest.ts. Then, export the endpoint as a CustomRequestHandler.

lerna bootstrap errors out compiling opr-core

Expected Behavior

npx lerna bootstrap would enable me to run npx lerna run test with passing tests

Actual Behavior

Looks like it is throwing an error compiling opr-core

> [email protected] prepare /Users/bill/Dropbox/0.HK/code/open-product-recovery/opr-core
> npm run compile

lerna ERR! lifecycle "prepare" errored in "opr-core", exiting 2

Steps to Reproduce the Problem

  1. Clone the repo
  2. run npx lerna bootstrap
  3. see the error

Specifications

git commit: 9e3b832

  • Version: npx: 8.12.1, node: v18.4.0, npm: 8.12.1
  • Platform: M1 MacOS 16
