bark's People

Contributors

akhilss99, asishshaji, bhagatvansh, deepto98, harikrishna-28, itishrishikesh, jayashankarjayan, neelkanthsingh, npmaile, pektezol, ravan0407, theankitbhardwaj, vaibhav-kaushal

bark's Issues

Option to print or disable Debug level logs in the client

When we are in production, we typically don't need debug level logs. The client should allow us to enable or disable sending debug level logs from the client to whatever the destination is (server, stdout, file, etc.).
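A minimal sketch of what such a toggle could look like on the client side, assuming a hypothetical Config struct with a DisableDebugLogs flag (the field and method names are illustrative, not bark's actual API):

// Hypothetical sketch: gate debug logs behind a client-side flag.
type Config struct {
    DisableDebugLogs bool
    // ... other client fields ...
}

// Debug drops the message early when debug logging is disabled,
// so it is never forwarded to the server, stdout or file sinks.
func (c *Config) Debug(msg string) {
    if c.DisableDebugLogs {
        return // drop debug logs entirely in production
    }
    // ... forward the message to the configured destination ...
}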

LogManager (or LogClient)

Develop a LogManager (or LogClient) package that can be used by an end user (any Go developer).

Current Bark Issues

  • Issue 1: Allow initialization (or configuration) of a logger with options for specifying a service name, session ID, and database connection string.
  • Issue 2: Implement different logging methods such as 'info', 'warn', and 'debug' to handle various log levels.

Here are some relevant examples from other libraries that we can consider:

This is how a developer uses Log4J (in Java) -

  • The logger object is typically initialized at the top of the class:
  • static Logger logObj = Logger.getLogger(Random.class.getName());
  • Then we use logObj to write logs to the console or standard output.

This is how a developer uses Zap (in Go) -

  • The logger object is usually initialized at the top of the method:
  • logger, _ := zap.NewProduction()
  • Then we use logger's various methods to write the logs to the console or standard output.

Both of these libraries then have multiple methods to write various log levels.
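By analogy, initializing and using a bark client could look something like the sketch below. The client.NewClient constructor and its parameters (server URL, service name, session ID) are assumptions for illustration only, not a finalized API:

// Hypothetical usage sketch for a bark LogClient.
log := client.NewClient("http://localhost:8080", "MyService", "session-01")

log.Info("service started")
log.Warn("disk usage above 80%")
log.Debug("request payload parsed")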

Rename the field `session_name` to `service_instance_name`

The field session_name in the database (and the corresponding struct fields in the code) actually refer to the instance of the service that sent the log message to the server. The current name session_name is ambiguous, especially when we are talking about user login session and similar stuff. We should rename it to something which makes more sense, e.g. service_instance_name would be a good choice.

FEAT : Add webhook support to bark client

NOTE: A new issue (#67) was created to better capture the needs according to the existing structure.

Just like we have .error, .info, and .warn, we can also have a .webhook that will insert a log into the database and also call the webhook URL with that log. You can find more information about webhooks here.

These can be used when a critical part of your application fails. Instead of just logging to the database, you can also call a webhook to notify you immediately or run an automation.

The function may look something like this:

func (c *Config) Webhook(webhookURL, logMsg string) {
	// ..logic
}

Batch Inserts

Currently, the project supports single inserts into the DB through endpoints. However, sets of logs (for example, those flushed during a graceful exit or accumulated across multiple calls) should be written in the most efficient way possible through batch inserts.
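A minimal sketch of how such a batch insert could be built with the standard database/sql and fmt packages. The app_log table layout and the BarkLog field names are assumptions for illustration; a single multi-row INSERT avoids one round trip per log:

// insertBatch writes all logs in one multi-row INSERT statement.
// Table and column names are assumptions for illustration.
func insertBatch(db *sql.DB, logs []BarkLog) error {
    if len(logs) == 0 {
        return nil
    }
    query := "INSERT INTO app_log (log_level, service_name, msg) VALUES "
    args := make([]interface{}, 0, len(logs)*3)
    for i, l := range logs {
        if i > 0 {
            query += ", "
        }
        query += fmt.Sprintf("($%d, $%d, $%d)", i*3+1, i*3+2, i*3+3)
        args = append(args, l.LogLevel, l.ServiceName, l.Msg)
    }
    _, err := db.Exec(query, args...)
    return err
}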

Add connection pool

Add a connection pool to reduce the overhead of opening and closing connections. Database connections are cached so that future requests can reuse an existing connection.
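Go's database/sql already maintains a connection pool internally; a sketch of tuning it is shown below. The driver name "pgx" assumes the pgx stdlib driver is registered, and the numbers are arbitrary examples:

// openDB returns a *sql.DB with its built-in connection pool tuned.
func openDB(databaseURL string) (*sql.DB, error) {
    db, err := sql.Open("pgx", databaseURL)
    if err != nil {
        return nil, err
    }
    db.SetMaxOpenConns(25)                  // cap concurrent connections to Postgres
    db.SetMaxIdleConns(10)                  // keep some connections warm for reuse
    db.SetConnMaxLifetime(30 * time.Minute) // recycle long-lived connections
    return db, nil
}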

no response body is returned for any http calls

We get an empty response body while trying to insert a log.


Here's the curl for the above request,

curl --request POST \
  --url http://127.0.0.1:8080/insertSingle \
  --data '{
    "id": -100,
    "logLevel": "info",
    "serviceName": "TestPostman",
    "sessionName": "testSession",
    "code": "1KE0H8",
    "msg": "Shows up in DB or not?",
    "moreData": {
        "name":"vaibhav"
    }
}'

This is true for all endpoints we have right now.
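A sketch of how a handler could return a JSON body after an insert, using only the standard net/http and encoding/json packages; the response shape here is an assumption, not the agreed contract:

// writeInsertResponse sends a small JSON acknowledgement instead of an empty body.
func writeInsertResponse(w http.ResponseWriter, status int, message string) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(status)
    _ = json.NewEncoder(w).Encode(map[string]string{
        "status":  "ok",
        "message": message,
    })
}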

Refactor Implementation to Replace Reflect API with SQLX for Improved Performance and Maintainability

The current implementation of the project relies on the reflect API, which is suboptimal for both performance and maintainability. This issue proposes refactoring the existing code to replace reflect with sqlx, a more efficient and structurally sound approach that offers significant improvements on both fronts.
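As a rough illustration of the direction: sqlx binds struct fields to named query parameters via db tags, so the manual reflect-based mapping goes away. The struct tags and table layout below are assumptions, not the project's actual schema:

// With sqlx, the struct's `db` tags drive the column mapping.
type BarkLog struct {
    LogLevel    string `db:"log_level"`
    ServiceName string `db:"service_name"`
    Msg         string `db:"msg"`
}

func insertLog(db *sqlx.DB, l BarkLog) error {
    _, err := db.NamedExec(
        `INSERT INTO app_log (log_level, service_name, msg)
         VALUES (:log_level, :service_name, :msg)`, l)
    return err
}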

Add function to allow user to set json in more_data while logging

The MoreData member of BarkLog takes a JSON value. We need to add functionality that allows the user to set it to any value of their choice. By default, it will be empty.

Something like:

more_data: {
    "errorFileName": "",
    "errorFunctionName": "",
    "errorLine": 0
}

Or anything relevant to the user's code base; this is just a random example.
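One possible shape for such a setter, assuming MoreData is stored as raw JSON; the method name WithMoreData and the use of a map are illustrative only:

// WithMoreData marshals an arbitrary map into the log's MoreData field.
func (l *BarkLog) WithMoreData(data map[string]any) error {
    raw, err := json.Marshal(data)
    if err != nil {
        return err
    }
    l.MoreData = raw // assumes MoreData is json.RawMessage (or []byte)
    return nil
}

A caller could then do something like logEntry.WithMoreData(map[string]any{"errorFileName": "main.go", "errorLine": 42}).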

Improve the insert performance in the server

We are performing way too many allocations in the StartWritingLogs function in the server's log writer file. We need to optimize that so that high bursts of incoming log traffic can be ingested. Things we can try:

  1. We can move the code in the BarkLogDao.InsertBatch function to the StartWritingLogs function - that should avoid the memory copy required for making the function call and the memory allocations happening inside the InsertBatch function.
  2. We can allocate a variable each for large, medium and small batches. These should be fixed-capacity, array-like buffers instead of growing slices so that the loop does not have to reallocate on append (see the sketch below).
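A sketch of the second idea: preallocate the batch buffer once (the capacity is an arbitrary example) and reslice it to zero length between flushes, so appends within a batch never reallocate. The function and type names are assumptions:

// Preallocated batch buffer reused across flushes; capacity is an arbitrary example.
var largeBatch = make([]BarkLog, 0, 10_000)

func flushLargeBatch(db *sql.DB) error {
    // ... write the contents of largeBatch to the DB in one batch insert ...
    largeBatch = largeBatch[:0] // keep the capacity, drop the contents, no reallocation
    return nil
}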

Unable to import into other projects

Problem: Right now Bark exposes a simple HTTP endpoint to insert data into the database, but doesn't have a package that can be imported into another project to be used directly.

Proposed behavior: Include a package logger which exposes a method GetLogger that returns a logger function, readily usable in other projects just by importing the logger package.

Example:

log := logger.GetLogger(serviceName)

Then log is used like this:

log(INT_LEVEL, "code", "message", some_json_data)
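A rough sketch of what such a logger package could export; the LoggerFunc type and its signature here are an interpretation of the example above, not a settled design:

package logger

// LoggerFunc matches the proposed call shape: level, code, message, extra JSON data.
type LoggerFunc func(level int, code, msg string, moreData map[string]any)

// GetLogger returns a logging function bound to the given service name.
func GetLogger(serviceName string) LoggerFunc {
    return func(level int, code, msg string, moreData map[string]any) {
        // ... build a BarkLog for serviceName and send it to the bark server ...
    }
}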

Benchmarking the SQL connectors

In the world of PostgreSQL, we have two very famous connectors:

  1. pq - the classic driver, commonly used with the popular sqlx library.
  2. pgx - which is also very popular, and known for being fast!

Now, bark is supposed to be fast, so we need to know which one is faster and by what margin. There are other questions around PostgreSQL connections, pooling, support for goroutines, error handling, etc., but for now we just need to check the performance difference between the two libraries!
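A skeleton of how the comparison could be run with Go's testing.B, assuming both the pq driver and pgx's database/sql-compatible stdlib driver are imported; the DSN and the app_log table are placeholders:

package benchmarks

import (
    "database/sql"
    "testing"

    _ "github.com/jackc/pgx/v5/stdlib" // registers driver name "pgx"
    _ "github.com/lib/pq"              // registers driver name "postgres"
)

func benchmarkInsert(b *testing.B, driverName string) {
    db, err := sql.Open(driverName, "postgres://localhost/bark_test?sslmode=disable") // placeholder DSN
    if err != nil {
        b.Fatal(err)
    }
    defer db.Close()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := db.Exec(`INSERT INTO app_log (msg) VALUES ($1)`, "bench"); err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkInsertPq(b *testing.B)  { benchmarkInsert(b, "postgres") }
func BenchmarkInsertPgx(b *testing.B) { benchmarkInsert(b, "pgx") }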

Implement Webhooks for ALERT level logs

A webhook here is a function supplied by the user, tied to the ALERT log level: when the client (and maybe someday the server too) encounters an ALERT level log, it will call the supplied webhook function. The purpose of this webhook is to call an external service, primarily to alert the user of a near-fatal event, something that needs to be looked into urgently.

Now, different people have different needs. My alerting mechanism might be to send the event to Slack, while another person or team might want to send the alert to PagerDuty, while someone else might want to trigger an email to their SRE team, or whatever. We can't determine what the user will do.

So the best we can do is to leave the option open: let the user write a function that does what they need. The authentication mechanism and the method of calling (HTTP/gRPC/sending to another channel, etc.) are left entirely to their choice. Hence this design.

That is the reason the webhook type was defined (and is currently commented out) in the client code as:

type webhook func(models.BarkLog) error

and was added to the Client struct type as:

AlertWebhook webhook

and the user function to hook their method to the client was defined like this:

func (c *Config) SetAlertWebhook(f webhook) {
	c.AlertWebhook = f
}

We need to write the mechanism to call this function when an ALERT level log is received.
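The dispatch itself could be a small check in the client's send path. A sketch assuming the webhook type and AlertWebhook field above, plus an assumed LogLevel field and "ALERT" level string on models.BarkLog:

// maybeCallAlertWebhook invokes the user-supplied webhook for ALERT logs.
// The call runs in its own goroutine so a slow external service does not
// block the logging hot path; errors are reported on stderr as a fallback.
func (c *Config) maybeCallAlertWebhook(l models.BarkLog) {
    if c.AlertWebhook == nil || l.LogLevel != "ALERT" {
        return
    }
    go func() {
        if err := c.AlertWebhook(l); err != nil {
            fmt.Fprintln(os.Stderr, "bark: alert webhook failed:", err)
        }
    }()
}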

Setup Github Actions pipeline for testing

We need to set up GitHub Actions for automated testing in our repository to ensure code quality and reliability. Automated testing is a crucial part of our development workflow and will help catch issues early, reduce manual testing efforts, and maintain the overall health of our project.

Prefix env var DATABASE_URL

The name of this variable might easily clash with env vars of other projects. The goal of this task is to prefix that variable with BARK_.

Writing logs to plain text file

The default flow of bark, after collecting logs, is to send them to the app_log table. However, if the user requires the logs to be written to a file, they should be able to configure that.

Using a simple variable while creating the logger, we can ask for the user's input: WriteToDB? Yes/No.

If it is no, then the logs, after being sent to the channel, will be written to a .txt file.
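A sketch of the file sink consuming the same channel, assuming logs arrive on a channel of BarkLog values when WriteToDB is false; the function and field names are illustrative and only the standard library (os, bufio, fmt) is used:

// writeToFile drains the log channel into a plain text file when WriteToDB is false.
func writeToFile(logs <-chan BarkLog, path string) error {
    f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
    if err != nil {
        return err
    }
    defer f.Close()

    w := bufio.NewWriter(f)
    defer w.Flush()

    for l := range logs {
        fmt.Fprintf(w, "%s | %s | %s\n", l.LogLevel, l.ServiceName, l.Msg)
    }
    return nil
}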

Write a Log file parser for Uber Zap

We should write an injector which will be able to read a log file output by Uber's zap library (running in production mode) and insert the logs into bark's database.
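A sketch of the parsing side, assuming zap's production preset defaults (one JSON object per line with level, ts, caller and msg keys); the mapping onto BarkLog fields is illustrative and only the standard library is used:

// zapLine mirrors the default JSON keys of zap's production encoder.
type zapLine struct {
    Level  string  `json:"level"`
    Ts     float64 `json:"ts"`
    Caller string  `json:"caller"`
    Msg    string  `json:"msg"`
}

// parseZapFile reads one JSON object per line and converts it into BarkLog values.
func parseZapFile(r io.Reader) ([]BarkLog, error) {
    var out []BarkLog
    sc := bufio.NewScanner(r)
    for sc.Scan() {
        var z zapLine
        if err := json.Unmarshal(sc.Bytes(), &z); err != nil {
            return nil, err
        }
        out = append(out, BarkLog{
            LogLevel:    strings.ToUpper(z.Level),
            ServiceName: z.Caller, // placeholder mapping; the real field choice is open
            Msg:         z.Msg,
        })
    }
    return out, sc.Err()
}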

Create the releaser script

We need to create a new script to release new versions. It should:

Tag and Push

Tag the latest commit on the main branch with the new version number and push it to GitHub.

Compile new binaries

For:

  1. Linux, x86_64
  2. Linux, ARM64
  3. macOS, ARM64
  4. macOS, x86_64

The output name for each binary should reflect the platform and architecture.

Create docker image

For:

  1. Linux, x86_64
  2. Linux, ARM64

The Docker images should carry the right tag (same as the version number).

Then it should be able to tag the builds and push them to Docker Hub!

Create a Sample Application Using Bark and Bark Client

We need to create a demo application that will showcase the capability of Bark. The application may highlight the following features:

  • Performance: Demonstrate the logging capabilities of Bark, showcasing its ability to handle a large volume of logs without impacting application performance.
  • Ease of use: Show how simple and straightforward it is to integrate Bark into an application and configure it to log messages.
  • Fallback mechanism: Illustrate how Bark handles logging failures by implementing a fallback mechanism, such as using uber-go/zap or posting to stdout.

This demo application will serve as a powerful example for developers, demonstrating the benefits and advantages of using Bark as a logging library and utilizing PostgreSQL for log storage.
