core's Issues

Add tests for database endpoints

Create a series of tests for each database endpoint, including the sudo versions.

This will require a working MongoDB instance available to run the tests.

[core & cli] Save dev database between CLI restart

In development mode the CLI uses the memory database provider, so everything is volatile and lost between restarts.

It could be helpful to have an option to save the memory database from a flag on the CLI server command.

Overview for this change

In the core package:

Two functions could be added to the memory/memory.go file to load and save the database.

The Memory type in memory.go:31 contains a map[string]map[string][]byte (oh yeah, I did that). This is basically what's used as the in-memory database.

The two new functions in this file could be (m *Memory) Load(filename string) error and (m *Memory) Save(filename string) error. We could use gob to encode/decode this DB to a file.

Ideally, I'd not want the Persister interface to have those two functions. Maybe we could create a new interface with those two and, when it's time to call them, check whether the runtime implementation satisfies it (pseudo code):

type DBReaderWriter interface {
  Load(string) error
  Save(string) error
}

p.s. writing this makes me wonder if the Memory type could not just implement io.Reader and io.Writer instead of having our own interface.

Nonetheless, somewhere there would be something like this:

rdr, ok := db.(memory.DBReaderWriter)
if ok {
  rdr.Load(filename)
}

in the cli program:

  • In the server.go file add a command parameter (like the Port), say "filename"
  • In the config we could pass this filename so that in the core server.go, when the DB is instantiated, we could have the Load and Save (from a defer) called.

This is an idea of how this could be implemented. Open to any suggestion; do not hesitate to share your ideas/suggestions here.

Embed UI templates via go:embed

While implementing the backend package I discovered that the //go:embed directive works when importing an external package.

Since the CLI uses the core package to expose a fully working local dev server, it would be nice to embed the templates for the web admin UI.

That way they'd be available from the CLI.

We'd use them like this in the render.go file:

//go:embed templates
var templFS embed.FS

Once this is done, the loadTemplates function in render.go line:20 should be changed to use the templFS above. Use io/fs instead of os to ReadDir.
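To keep this runnable without an actual templates directory, here is a sketch of an fs.FS-based loadTemplates using testing/fstest's MapFS as a stand-in; in render.go the embedded templFS would be passed instead. The function signature is an assumption, not the existing one.

```go
package main

import (
	"html/template"
	"io/fs"
	"testing/fstest"
)

// loadTemplates walks an fs.FS instead of calling os.ReadDir, so the same
// code works with an embed.FS or any other fs.FS implementation.
func loadTemplates(fsys fs.FS, dir string) (*template.Template, error) {
	entries, err := fs.ReadDir(fsys, dir)
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(entries))
	for _, e := range entries {
		if !e.IsDir() {
			names = append(names, dir+"/"+e.Name())
		}
	}
	return template.ParseFS(fsys, names...)
}

// In render.go the call site would look roughly like:
//   tmpl, err := loadTemplates(templFS, "templates")
// Here a MapFS stands in for the embedded filesystem.
var demoFS = fstest.MapFS{
	"templates/index.html": {Data: []byte(`<h1>{{.Title}}</h1>`)},
}
```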

Add magic link/code authentication

In some use cases, a magic link via email or a code via text message is the preferred authentication mechanism.

Let's implement this the simple way with Twilio for text messages and the built-in email functionality of SB.

Expose an HTTP client to the function runtime

The server-side functions run inside a custom JavaScript runtime. It only supports bare JavaScript, with additional functionality added via the VM.

One useful addition would be a way to perform HTTP requests from a function.

See function/runtime.go line:110.

We could add a helper function like fetch to stay "consistent" with the JS ecosystem, or something like webreq.

I can see the function taking the following as parameters:

// example from the function code PoV
const resp = webreq("POST", "https://myurl.com", {hello: "world"}, {"MY-HEADER-KEY": "something"});

This would accept the HTTP method, URL, body as JSON, and headers as a map.

And this helper function would return:

type HTTPResponse struct {
  Status int    `json:"status"`
  Body   string `json:"body"`
}

Create web UI for file storage mgmt

A simple web UI to upload, view and delete files.

Uploading and deleting should trigger a message, since if the user was using that file somewhere they might want to update the document(s) where it's used.

Using environment variables from .env file under vscode

Is your feature request related to a problem? Please describe.
Go code reads OS (shell) environment variables via the os.Getenv("name") API.
To get values saved in a .env file during development, the path to this file has to be added to the IDE's JSON config files, which is a cumbersome process.

Describe the solution you'd like

Use a new config.Getenv("name") that would:

  1. [Once] Read the .env file and inject its values into the environment (using godotenv)
  2. Call os.Getenv

Describe alternatives you've considered
Add the env variables to .vscode/launch.json and save it in the repository, something like:

            "env": {
                "DEV_ENV": "true",
                "DOCKER_ENV": "true"
            }

Additional context: example of godotenv usage.

[Refactoring] Add helper functions process

Current code for adding helper functions

	if err := env.addHelpers(vm); err != nil {
		return err
	}
	if err := env.addDatabaseFunctions(vm); err != nil {
		return err
	}
	if err := env.addVolatileFunctions(vm); err != nil {
		return err
	}
	if err := env.addSearch(vm); err != nil {
		return err
	}
	if err := env.addSendMail(vm); err != nil {
		return err
	}

Helper functions could be moved to a separate package and added to a function registry via init().

Within Execute, the registered functions would be added.

The core code would be clearer.

It would also allow adding custom helper functions.
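A minimal sketch of the registry idea; the names HelperFunc, Register, and addRegistered are hypothetical, and the vm parameter is typed as any to keep the sketch self-contained (it would be the runtime's VM type in core):

```go
package main

// HelperFunc adds one helper to a VM. The parameter is `any` here only so
// the sketch compiles standalone.
type HelperFunc func(vm any) error

var registry []HelperFunc

// Register is called from each helper package's init().
func Register(f HelperFunc) { registry = append(registry, f) }

// addRegistered replaces the long if-err chain in Execute: every
// registered helper gets added, including custom ones.
func addRegistered(vm any) error {
	for _, f := range registry {
		if err := f(vm); err != nil {
			return err
		}
	}
	return nil
}

func init() {
	// e.g. the database helpers package would do this in its own init().
	Register(func(vm any) error { return nil })
}
```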

Import a package from Go that embeds core's features without HTTP calls

Importing the core's functionality inside a Go program, via a package that wraps the internals rather than the Go client library, would prevent having to host the backend API separately from the main Go web application.

This will require exploration and experimentation to see if it's possible and what kind of changes are required. In an ideal world there would be no breaking changes.

The idea would be to create a package, similar to the Go client library, that would be the one imported; but instead of calling the API via HTTP requests, it would call the internals directly.

The requirements of Redis and either PostgreSQL or Mongo will still stand, but it would remove the need to host the backend API separately.

[refactor]: Remove ShiftPath and replace with getURLPart

For simplicity, ShiftPath should be removed and all its usages replaced with getURLPart.

ShiftPath takes the request path, returns one segment of the URL, and returns a modified, shrunken version of the path:

// URL: /a/b/c
r.URL.Path, c := ShiftPath(r.URL.Path) // c = "c"
r.URL.Path, b := ShiftPath(r.URL.Path) // b = "b"

The simpler getURLPart function gets a URL value by index:

// URL: /a/b/c
c := getURLPart(r.URL.Path, 3) // notice that indexes starts at 1, not 0. a is at idx 1, b at 2 and c at 3
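For reference, a minimal sketch of that 1-based lookup (the real getURLPart already exists in the repository and may differ):

```go
package main

import "strings"

// getURLPart returns the path segment at a 1-based index,
// or "" when the index is out of range.
func getURLPart(path string, idx int) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if idx < 1 || idx > len(parts) {
		return ""
	}
	return parts[idx-1]
}
```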

I'd start this refactor by removing the ShiftPath function from url.go. In fact, I'd delete the entire url.go file and run go build to fix the compile errors.

Before replacing a call to ShiftPath, we need to make sure we know the proper index to use with getURLPart.

For this, referring back to the URL handled in server.go for each handler is how I'd verify it.

Add / remove user from account

Even though the account model is Account->Users, there's no way to add and remove users from an account.

The adding part should be straightforward, but removing a user raises the question of what happens to the documents created by that user. The default permission is that an account's users can read, but not modify, docs created by other users in their account.

I suppose it's the application's responsibility to handle this via permissions that would allow other users from the same account to modify/delete documents they haven't created.

For v1.5, only adding and removing will be added; depending on the feedback, StaticBackend may bulk-update the owner of documents when deleting a user. TBD.

Add the PublishDocument event trigger after the bulk update

We need to add the PublishDocument trigger to the updateDocuments functions located in database/postgresql/base.go:265 and database/mongo/base.go:369.

It will require a new function, something that can grab multiple documents by ID: to publish the changed documents they'll need to be fetched, and we want to avoid calling GetByID in a loop.

Create a dedicated accounts/users page in the web UI to view/manage users

Since 5351407 the web UI database page no longer displays the system collections, the ones prefixed with sb_ owned by StaticBackend.

In #26 there was an item regarding adding a dedicated page for user management. This is its dedicated issue.

I'd see a simple table listing the sb_accounts, and when clicking on an account we would see its users from sb_tokens, either directly on the same page or on a new page for this account.

There's currently no way to list accounts and users from an account. Here's how I'd approach this:

  1. In persistor.go I'd add two new functions in the // system user account functions section: ListAccounts and ListUsers.
  2. An implementation for the postgresql, mongo, and memory providers will be required.
  3. In ui.go, create two new handlers like listAccounts and listUsers. The listAccounts handler does not need many parameters, but listUsers will require an accountId.
  4. Add the two routes in server.go line:227 near the webUI := ui{} block of routes.

Additional thoughts.

  • It could be nice to be able to search by email for an account / user on that page.
  • Ideally, accounts would be sorted by most recently created.

Server-side JavaScript runtime to run functions

A user should be able to run pieces of code (functions) triggered by a message (a live reaction to a system event, or a scheduled task).

For the v1, functions will be executed inside a ~custom JavaScript runtime and will be sandboxed.

The majority of SB functionalities should be exposed to the runtime:

  • Database CRUD operations
  • File storage (read / write)
  • Publishing message to the message queue

Create abstraction for file storage

To remove the hard dependencies on AWS S3 for file storage and CloudFront for CDN, we'll need to create an abstraction for the file storage functions.

It will be the user's responsibility to provide the URL (CDN or not) used to access the files, and having different implementations for different providers will make things easier in the future for non-AWS users.

Clean-up backend package test

Since the merge of #88, make alltest is failing: the search package seems to conflict over opening the search index file while tests run in parallel / concurrently.

Bottom line is that the Setup function of the backend package should not instantiate the search.Search struct by calling its New function.

It's not extremely clear what the right course of action is from here. I believe the tests and the entire dependency will need some more thinking to prevent that kind of issue.

Implement the in and !in query operators for PostgreSQL

The query operators in and !in, which are supposed to look for the presence or absence of an item in a list, aren't implemented yet in the PostgreSQL database package.

In database/postgresql/query.go line:37, an implementation is required for these query operators to work.

As a reference for implementing this against PostgreSQL's JSONB format: https://www.postgresql.org/docs/9.5/functions-json.html

Once implemented, this change can be tested via a test in the database/postgresql package, using the QueryDocuments function for instance.
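One possible shape for the generated SQL, sketched as a tiny clause builder (a hypothetical helper, not the existing query.go code): in compares the extracted text value with = ANY against the list parameter, while !in uses <> ALL:

```go
package main

import "fmt"

// inClause returns a WHERE fragment for the `in` / `!in` operators against
// a JSONB column named data, using a positional parameter for the list.
func inClause(field string, negate bool, paramIdx int) string {
	op := "= ANY"
	if negate {
		op = "<> ALL"
	}
	// data->>'field' extracts the value as text, compared against
	// the array bound at $paramIdx.
	return fmt.Sprintf("data->>'%s' %s ($%d)", field, op, paramIdx)
}
```

The list itself would be bound as an array parameter by the caller; field names should come from validated input, not raw user strings.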

feat: add a logger for a better development experience

Is your feature request related to a problem? Please describe.
As a new contributor, I can say that it's really hard to develop the application without logging: you can get errors, and fmt.Println doesn't give all the information about an error, and sometimes can't print it at all.

Describe the solution you'd like
We could use a modern logger such as zap, logrus, or zerolog, which would write logs to a file and the console.
It would be more comfortable than the default fmt.Println or log.Println.

I see it as a structure that is passed to other functions as an argument.
This structure would be configurable with LOG_LEVEL, or we could do it with APP_ENV.
For example: if APP_ENV equals dev we log messages from all levels, but if it equals prod we log only error or critical levels.

Additional context
People from Discord say that we should use zerolog, and from what I've seen it looks good.
What do you think, should we implement it?

Add documentation for server-side functions

Create the documentation (docs, video, blog) detailing the new component that can be used to perform server-side actions on the database and other StaticBackend resources.

Expose the Volatilizer interface functions to the server-side runtime

There are functions from the internal.Volatilizer (Cache / PubSub) interface that are missing from a server-side function's PoV and that would be useful to have in functions.

The custom server-side runtime is implemented in the function package in the runtime.go file.

At this moment, the runtime adds functions that are made available to the function's JavaScript interpreter.

In the addVolatileFunctions function in runtime.go line:391 we would need to add the following functions from the interface.

Look at the send function for an example.

cacheGet and cacheSet would wrap the interface's GetType and SetType functions, which handle passing strings as well as any object.

inc and dec would map to the interface's Inc and Dec functions.

Lastly, queueWork maps to the interface's QueueWork function.

Adding those runtime functions will enable a world of possibilities from server-side functions, even though they run inside a sandboxed runtime.

Local/dev Cache implementation missing pub/sub

The CLI, and potentially devs using the core package with the local cache instead of Redis, need the pub/sub aspect of the cache module implemented.

See cache/dev.go line:63 and line:67, the Subscribe and Publish functions.

The pub/sub aspect of the Cache module is used for publishing events (system or user) and subscribing to execute code: for instance, a server-side function reacting to a system-published event when a database document is updated.

The goal of the dev cache implementation is to not rely on an external service like Redis. This implementation will need to be memory based and should use the channel received in the Subscribe function to notify subscribed clients about published events.

Here's a high level of what could be done:

  1. It requires a way to store subscribers. Meaning storing their channels to send data and close their subscription.
  2. Since subscriptions are channel/topic based, a map[string]PrivateStruct could work where PrivateStruct could include both channels.
  3. In the Publish function, we could grab all subscribers to the posted channel and send them the message.
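The three steps above could be sketched like this; all names are hypothetical, and the real Subscribe/Publish signatures in cache/dev.go differ (notably around the channel types):

```go
package main

import "sync"

// DevPubSub is an in-memory stand-in for the Redis pub/sub:
// a map of topic -> subscriber channels guarded by a mutex.
type DevPubSub struct {
	mu   sync.RWMutex
	subs map[string][]chan string
}

func NewDevPubSub() *DevPubSub {
	return &DevPubSub{subs: map[string][]chan string{}}
}

// Subscribe registers a new subscriber channel for a topic.
// The channel is buffered so Publish never blocks on a slow reader.
func (p *DevPubSub) Subscribe(topic string) <-chan string {
	ch := make(chan string, 16)
	p.mu.Lock()
	p.subs[topic] = append(p.subs[topic], ch)
	p.mu.Unlock()
	return ch
}

// Publish sends the message to every subscriber of the topic,
// dropping it for subscribers whose buffer is full.
func (p *DevPubSub) Publish(topic, msg string) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	for _, ch := range p.subs[topic] {
		select {
		case ch <- msg:
		default:
		}
	}
}
```

A real implementation would also need unsubscription/close handling, which is where the second channel mentioned in step 2 comes in.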

Adding tests might be a little bit challenging as this involves Go channels, but it would be important to have them.

Having tests for the Volatilizer interface would also test the Redis implementation, which is not tested at the moment.

Add ability to update and delete many entries by criteria

This will require 2 new database endpoints.

This should take the same query logic as the query function in db.go:167 and use the UpdateMany and DeleteMany functions respectively to update or delete more than one entry based on filter criteria.

Implement a full-text search capability

A full-text catalog is a useful feature for most applications.

This is a proposal of how I'd see this feature being added.

Overall concept

To keep things simple, I think SB would handle one reserved table/collection that supports FTS. Its schema could be similar to this:

Name: sb_fts (this would exist in all tenant databases)

id: string, auto ID
accountId: string, the account owner
referenceId: string, the ID this item refers to
searchField: string, the FTS content to search in
previewData: JSON representation of a tiny "view" of the referenced object

A database would have only one table to perform full-text search on.

The full-text index would index what's inside the searchField.

The caller would receive the previewData of all matches. This is useful since most full-text searches feed a search results page; having a quick way to display the important data for each match is handy.

The referenceId is the ID of the referenced entity. Continuing the search-results analogy, one could build a URL from this ID to load the entire entity.

The "how"

PostgreSQL and MongoDB support full-text search natively. If SQLite is implemented (#63), full-text search is also supported there.

The memory database implementation would most certainly not offer FTS. Or, if time allows, we could leverage an in-memory text search. TBD.

We could have a simple Search function in the Persister interface. Each database provider would implement its own version.

We'd also require a function like IndexContent (name to be refined) in the Persister interface.

An example of those function prototypes could be:

// in model
type FullTextData struct {
  ID          string         `json:"id"`
  AccountID   string         `json:"accountId"`
  ReferenceID string         `json:"refId"`
  SearchField string         `json:"searchField"`
  PreviewData map[string]any `json:"previewData"`
}
func Search(auth model.Auth, search string) ([]FullTextData, error)
func IndexContent(auth model.Auth, data FullTextData) error

To be careful

Since sb_fts holds preview data of real entities living in other tables/collections, those entries need to reflect updated values and be removed when the main entity is deleted.

At this moment, I'm not certain if this would be the responsibility of the dev or SB.

The developer could listen to database events and create functions that react to updates and deletes, applying the desired changes to the FTS table/collection.

If it's SB's responsibility, I don't see how it can know about the schema the previewData should have.

Some ideas:

  1. Maybe it can perform a get by ID and update the previewData map with matching keys from the updated document. This removes SB from having to know anything about the user's data.

For now, that's the only way that comes to mind. TBD.

API endpoint

This could be a reserved word like /db/fts, which means a user could never have a table/collection called fts; maybe it could be sbfts. TBD.

This would need to be added to all client libraries, as well as to the backend package for Go devs using SB directly.

Deploying those changes

This will most certainly require a new SQL migration for PostgreSQL. And since the sb_fts table is defined in "user land", this migration would need to add it for all existing databases (PG schemas).

This will be the first time a change needs to update all users' databases. It will require testing before going into production.

Error when list/query for not created tables

Describe the bug
The creation of tables occurs on the first insertion (a call to the CreateDocument function).

For this reason, code that tries to get records from a table that has not yet had a document created will return an error.

To Reproduce

  1. Start the CLI: $ backend server
  2. Issue a ListDocument request:
$ curl -H "SB-PUBLIC-KEY: dev-memory-pk" -X POST -d '{"email": "[email protected]", "password": "devpw1234"}' http://localhost:8099/login
$ curl -H "SB-PUBLIC-KEY: dev-memory-pk" -H "Authorization: Bearer session-token-you-got-above" http://localhost:8099/db/new-table-here

Error returned:

collection not found

Expected behavior

It might be better to return an empty slice instead of an error. The "standard" web CRUD flow is to create the "listing" page before users go create entities.

Output of staticbackend -v

StaticBackend version v1.4.0-rc2-13-gb392f7a | 2022-08-23.05:33:15 (b392f7a)

Additional context

This should be the same behavior for all 3 database packages: postgresql, mongo, and memory.

Add Count functions in database

Sometimes it's useful to have a count of how many records there are in a collection, with or without filters.

Adding a Count function to all 3 supported database engines would involve the following:

  1. Adding the function to the Persister interface in the database package
  2. Implementing it for the postgresql, mongo, and memory database engines
  3. Adding an endpoint in the backend API

Whether or not you write your tests first, having tests in the main package as well as inside the different DB engine packages would be nice.

Since the return type would be (int64, error), there's no need to implement a generic function in the backend's Database[T] function set.

I'd see the following function prototype:

// Count returns the number of entries in a collection based on optional filters
Count(auth model.Auth, dbName, col string, filters map[string]interface{}) (int64, error)

The filters would be optional. See how the UpdateMany function also uses similar filters.

For the endpoint, we could add a new route in server.go near the routes starting with /db, say /db/count/, which would call a handler defined in db.go accepting POST requests with the optional filters (look in db.go at the UpdateMany handler for an example).

p.s. the typical database security applies, so the standard permission checks would happen before the optional filters

Support URL-like database DSN

At this moment the database connection string is specified via the environment variable DATABASE_URL.

It would be nice to be able to specify a URL-like database DSN using a format like: postgresql://user:pass@localhost/dbname?sslmode=disable.

Ideally the goal would be to keep only one env variable, DATABASE_URL, and detect whether it contains a URL-like DSN or the database provider's native connection string format.

Library: https://github.com/xo/dburl
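The detection part could be as small as checking for a URL scheme, with xo/dburl then doing the actual parsing; isURLDSN below is a hypothetical helper:

```go
package main

import (
	"net/url"
	"strings"
)

// isURLDSN reports whether DATABASE_URL looks like a URL-style DSN
// (postgresql://...) rather than a provider-native connection string.
func isURLDSN(dsn string) bool {
	if !strings.Contains(dsn, "://") {
		return false
	}
	u, err := url.Parse(dsn)
	return err == nil && u.Scheme != ""
}
```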

Manage server-side functions

For the v1, users will manage their functions via the CLI.

Uploading and deleting functions, for instance:

$> backend function add --trigger-topic=trials_expire ./functions/trials_expire.js

Only .js JavaScript files will be handled for now.

Expose Redis pub/sub as message queue

To start triggering events (internally and externally), we need to implement an events / message queue system.

Redis pub/sub will be used, and the publishers will be:

  1. The backend itself, which will publish system events
  2. The client-side or server-side libraries, which will be able to publish messages

Implement a SQLite provider

A sqlite package in the database directory would allow people to have the full experience without installing any external database server engine. Coupled with the memory Cache implementation, this would make StaticBackend a fully functional single binary with no external dependencies.

This addition involves implementing the database.Persister interface in a new sqlite package.

Once that's done, we'd need to add the SQLite database opening in the backend package by handling the two values for the config's DatabaseURL and DataStore: a DataStore of sqlite for SQLite.

Since SQLite doesn't allow multiple schemas, replicating the multi-tenant aspect of PostgreSQL and Mongo could be done by prefixing the table names.

I'd assume that someone using SQLite would most certainly not handle multi-tenancy in their application.

Since SQLite 3.9 it's possible to handle JSON (inserting and querying), so the implementation would replicate more or less PostgreSQL's.

Create abstraction for sending email

We need to remove the hard dependency on AWS SES for sending email.

Creating an interface, with a way to receive different provider implementations in the future, will be friendlier to non-AWS users.

Improve the admin web dashboard UI

The admin web dashboard accessible at https://localhost:8099 is still a WIP.

Here are some aspects that could be improved:

  1. The database collection's data listing is not optimal.
  2. The database editing only supports one field at a time, and only strings, numbers, and booleans.
  3. We cannot delete an entry from a database collection.
  4. The files tab is not done. This should be file-manager-like (listing, viewing, and deleting files).
  5. The forms tab is not done. It should display all forms and all the data posted to each (list, view, delete).
  6. It would be nicer to have a dedicated users tab displaying the accounts and users info (sb_accounts and sb_users collections).

[backend-js] Build an additional bundle importable via <script> tag

The JavaScript client library currently only builds a bundle that can be imported, for instance in a React application, via:

import { Backend } from "@staticbackend/js";

It could be nice to have an option to directly use the library via a <script> tag in vanilla JavaScript, a bit like Alpine.js or htmx:

<script src="/some/path/backend.js"></script>

I'm not certain how it's done; I think this is what's called an IIFE build. Do we need to update the build system?

In short, I could use the help of a more experienced front-end developer with everything bundling related.

[cli] Add a tail command to follow server logs

It could be helpful to have a way to display HTTP request info.

In the CLI program we could add a command line parameter to the server command, say --tail, which would display the HTTP requests being handled by the server.

In the core package, we currently do not have a log middleware. We'd need one, using the logger to produce nice output, but only when the --tail option is used.

This means that in the config we would need to add a new option to indicate if we want to display the server log or not.

The log middleware would be added to all middleware chains only if that config flag is true.

What I mean by middleware chains can be found in server.go:117:

stdAuth := []middleware.Middleware{
  middleware.Cors(),
  middleware.WithDB(backend.DB, backend.Cache, getStripePortalURL),
  // ...
}

Since this function receives the config, the log middleware could accept a flag to be verbose or not, for example:

stdAuth := []middleware.Middleware{
  middleware.Log(config.TailRequests),
  middleware.Cors(),
  middleware.WithDB(backend.DB, backend.Cache, getStripePortalURL),
  // ...
}

This assumes the config flag is called TailRequests and the log middleware Log.

If TailRequests is false, the middleware simply does nothing.

review quick start for backend client library (go doc)

Describe the bug
When following the quick start for importing and using core as a client library, I noticed that when calling backend.Setup(cfg) it wasn't picking up the RedisURL I had set.

// backend.Setup(cfg)

Turns out that backend.Setup(cfg) eventually sets up the cache by looking at config.Current

if uri := config.Current.RedisURL; len(uri) > 0 {

As a workaround, I refactored the getting-started code to call backend.Setup(config.Current) instead, something like this:

config.Current = config.LoadConfig() // first get from environment and then override as needed
config.Current.AppEnv = "dev"
config.Current.DataStore = "PostgreSQL"
config.Current.DatabaseURL = "postgres://user:password@localhost:5432/postgres?sslmode=disabled"
config.Current.RedisURL = "redis://localhost:6379"
config.Current.LocalStorageURL = "http://localhost:8099"
backend.Setup(config.Current)

Add tests for authentication

Add tests for authentication (login / register / auth middleware).

This will require a MongoDB instance available.

Create a scheduler to execute scheduled tasks

Via the CLI it will be possible to set scheduled tasks that run at specific intervals.

An interval could be expressed in the cron format.

A task is mostly just a message published to the message queue; either the backend/system processes it, or a custom function created by the user does.

A system event could be posting a webhook to a specific URL, for instance.

The goal of those scheduled tasks is to let the user create things like:

  • Trial expiration tasks (email, updating a document, etc)
  • A monthly calculation or clean-up (a user might want to run a function each month to do X)

Scheduled task improvements

This feature isn't easy to discover and use: it lacks documentation, and it's still a bit rough around the edges regarding how things get executed.

For the v1.5, this feature will get some major improvements:

  • Log (info) so while developing it's clear what's going on
  • Add a way to add tasks at runtime, so a production running instance can start processing new tasks without restarting
  • Create documentation, examples and videos
  • Test all 3 different supported task types (call a function, publish a message to a pubsub topic, make an HTTP call)

Replace the PubSuber interface with the Volatilizer interface

There's duplication between those two interfaces. Both are in the internal package.

The Volatilizer interface is the one to keep, and it is backward compatible with the PubSuber one.

Tasks:

  1. Remove the internal/pubsuber.go file
  2. Replace uses of internal.PubSuber with internal.Volatilizer
  3. Run tests

Add email interface to server-side function runtime

We need to add the ability to send email from server-side functions.

In the runtime.go file of package function we could add the Emailer:

  • We could add the Emailer interface to the ExecutionEnvironment at line:20.
  • We would need to pass the implementation when calling the Execute function.
  • We would need to expose an email() function to the function runtime; for instance, at line:114 in the addHelpers function, we could expose an email function accepting what's needed to satisfy the Send function of Emailer.

Database provider tests clean-up

There are currently 3 implementations of the Persister interface (database/persister.go), which includes all the database functions.

Each package has the same (99% identical) test code. For instance, checking the number of lines in the memory base_test.go file vs. the mongo one:

$ wc -l database/memory/base_test.go
406 database/memory/base_test.go
$ wc -l database/mongo/base_test.go
406 database/mongo/base_test.go

Brainstorming

  1. Can we remove all this duplication and have one set of tests that can test all implementations?
  2. At the moment there are entries in the Makefile to target a specific database engine implementation; this is useful when implementing a new feature, to only run the tests for the provider currently being developed. I would not want to lose this after no. 1.

Problem when I try to run the docker build command

Describe the bug
The project fails to build using Docker on a Windows machine with WSL 2.

To Reproduce
Using a machine running Windows, with Docker on WSL 2 (with Ubuntu):

  • Clone the project.
  • Copy .demo.env to .env
  • Execute the docker build command: docker build . -t staticbackend:latest
  • See the error


Fix TestHtmlToPDF occasional issue

Describe the bug
GitHub Actions sometimes fail because this test occasionally raises an error. I've seen it happen once in development as well.

error line:128: context deadline exceeded

Lines in question:

if resp.StatusCode > 299 {
  b, err := io.ReadAll(resp.Body)
  if err != nil {
    t.Fatal(err)
  }
  t.Log(string(b))
  t.Errorf("expected status 200 got %s", resp.Status)
}

To Reproduce

  1. First remove the t.Skip in this test in extras_test.go line:116
  2. Running this test multiple times in a row should trigger the issue:
$ make thistest TestHtmlToPDF

Expected behavior
This should always work and not trigger this error.

Output of staticbackend -v
This flag isn't implemented yet.
