
Comments (6)

lithdew commented on May 30, 2024

Great to hear that! I believe it shouldn't be a problem running QuickJS inside http.Handlers, since net/http spawns a unique goroutine per request.

Nonetheless, runtime.LockOSThread() will cause a bit of a performance impact, though it should be negligible.
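
For reference, a rough sketch of that per-request pattern with the OS thread locked for the handler's lifetime, using the same lithdew/quickjs API as the benchmark below (the port, script, and error handling are just illustrative):

package main

import (
	"net/http"
	"runtime"
	"strconv"

	"github.com/lithdew/quickjs"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Pin this request's goroutine to its OS thread while QuickJS objects are alive.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	rt := quickjs.NewRuntime()
	defer rt.Free()
	ctx := rt.NewContext()
	defer ctx.Free()

	result, err := ctx.Eval(`1 + 2 * 100 * Math.random()`)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer result.Free()

	w.Write([]byte(strconv.Itoa(int(result.Int64()))))
}

func main() {
	http.ListenAndServe(":3000", http.HandlerFunc(handler))
}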


matthewmueller commented on May 30, 2024

Thanks so much for your response. I wrote a quick little program to benchmark it, and performance seems quite good.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"

	"github.com/lithdew/quickjs"
)

func main() {
	http.ListenAndServe(":3000", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		one := time.Now()
		js := quickjs.NewRuntime()
		defer js.Free()
		fmt.Println(time.Since(one))
		two := time.Now()
		context := js.NewContext()
		defer context.Free()
		fmt.Println(time.Since(two))

		three := time.Now()
		result, err := context.Eval(`1 + 2 * 100 * Math.random()`)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer result.Free()
		fmt.Println(time.Since(three))

		w.Write([]byte(strconv.Itoa(int(result.Int64()))))
	}))
}

It happily churns through 500 req/sec 😬

echo "GET http://localhost:3000/" | vegeta attack -duration=30s -rate=500 | tee results.bin | vegeta report
Requests      [total, rate, throughput]         15000, 500.04, 500.03
Duration      [total, attack, wait]             29.998s, 29.998s, 566.909µs
Latencies     [min, mean, 50, 90, 95, 99, max]  459.406µs, 666.774µs, 615.891µs, 869.536µs, 986.747µs, 1.035ms, 13.698ms
Bytes In      [total, mean]                     36938, 2.46
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:15000
Error Set:

I was a bit surprised that I needed to put quickjs.NewRuntime() in the handler itself. Without it, I end up with an empty error from result, err := context.Eval("1 + 2 * 100 * Math.random()") after the 2nd request. Happy to open a more specific issue if you see this as a potential bug.

Also, since I'm not too familiar with runtime.LockOSThread(), I wanted to try to produce a bad outcome by omitting it. So far I haven't been able to produce anything. Is the problem a race condition between goroutines trying to modify memory in C? Any pointers on how I can witness the runtime.LockOSThread() issue?


lithdew commented on May 30, 2024

The problem is that QuickJS allocates memory in the thread it's initialized in. As goroutines get scheduled and moved across threads, a goroutine holding a quickjs.Runtime may suddenly find that the memory it was holding does not exist in its current running thread, causing a segmentation fault.

Hence runtime.LockOSThread is necessary.

The issue is demonstrated in this test, which led to Guideline 5 being established: https://github.com/lithdew/quickjs/blob/master/quickjs_test.go#L130

Removing runtime.LockOSThread() in the test above will cause a segmentation fault.
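
As a rough, hedged sketch (not the repo's actual test), this is the kind of program that demonstrates it: each goroutine owns its own runtime but deliberately skips runtime.LockOSThread(), so the scheduler is free to migrate it across OS threads mid-evaluation, and the program is expected to crash rather than finish cleanly:

package main

import (
	"sync"

	"github.com/lithdew/quickjs"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 32; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// runtime.LockOSThread() intentionally omitted: the goroutine may
			// migrate to another OS thread between or during evaluations.
			rt := quickjs.NewRuntime()
			defer rt.Free()
			ctx := rt.NewContext()
			defer ctx.Free()
			for j := 0; j < 10000; j++ {
				result, err := ctx.Eval(`1 + 2 * 100 * Math.random()`)
				if err != nil {
					return
				}
				result.Free()
			}
		}()
	}
	wg.Wait()
}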

Thinking about it, by the way: creating a runtime per HTTP request handled is typically advised against (doing so could create hundreds of runtimes, which are expensive to initialize).

What could be done instead is creating a worker pool of runtime.NumCPU() goroutines, each locked to its own thread and holding its own quickjs.Runtime instance. Your http.Handlers can then submit work to them to be processed.


matthewmueller commented on May 30, 2024

Ah that makes more sense. I'll give that test a whirl! I'll also bench the worker pattern and post my results here. From what I saw it was actually creating the context that took the most time:

Create Runtime: 35.067µs
Create Context: 236.605µs
Eval: 31.383µs

Thanks for taking the time to share your knowledge!


matthewmueller commented on May 30, 2024

Hey there! I wanted to check in after doing some additional research. I was able to get pooling working with the following code:

quickjs.go
package quickjs

import (
	"context"
	"fmt"
	"runtime"

	"github.com/lithdew/quickjs"
)

// NewWorker creates a new worker
// Worker is not goroutine safe
func NewWorker() *Worker {
	runtime := quickjs.NewRuntime()
	return &Worker{runtime}
}

// Worker struct
type Worker struct {
	runtime quickjs.Runtime
}

// Eval evaluates js on the worker's runtime and stores the result in v,
// which must be a *string, *int64, *bool, or nil
func (w *Worker) Eval(ctx context.Context, js string, v interface{}) (err error) {
	context := w.runtime.NewContext()
	defer context.Free()
	result, err := context.Eval(js)
	if err != nil {
		return err
	}
	// Free the result on every return path (including the error cases below).
	defer result.Free()

	switch t := v.(type) {
	case *string: // we expect a string
		if !result.IsString() {
			return fmt.Errorf("Eval expected a string, but received %q", result.String())
		}
		*t = result.String()
		return nil

	case *int64:
		if !result.IsNumber() {
			return fmt.Errorf("Eval expected a number, but received %q", result.String())
		}
		*t = result.Int64()
		return nil

	case *bool:
		if !result.IsBool() {
			return fmt.Errorf("Eval expected a boolean, but received %q", result.String())
		}
		*t = result.Bool()
		return nil

	case nil: // ignore the output
		return nil

	default:
		return fmt.Errorf("quickjs.Eval: unable to coerce into expected value of type %T", t)
	}
}

// Close the worker
func (w *Worker) Close() {
	w.runtime.Free()
}

// NewPool creates a pool of workers
func NewPool(size int) *Pool {
	requestCh := make(chan request, size)
	pool := &Pool{requestCh}
	// start all available workers
	for i := 0; i < size; i++ {
		go pool.worker(i)
	}
	return pool
}

// Pool of workers
type Pool struct {
	requestCh chan request
}

// TODO: sometimes goroutines get scheduled on the same thread.
// then they're locked to that thread. You can see this with
// -trace. Is there any way to force the goroutines to get
// scheduled on different threads?
//
// Even better, is it possible to remove this requirement
// entirely? By cleaning up state after each run or something?
func (p *Pool) worker(id int) {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	w := NewWorker()
	defer w.Close()
	for req := range p.requestCh {
		err := w.Eval(req.ctx, req.js, req.result)
		req.response <- response{err}
	}
}

type request struct {
	ctx      context.Context
	js       string
	result   interface{}
	response chan response
}

type response struct {
	err error
}

// Eval submits js to the pool and blocks until a worker returns the result
func (p *Pool) Eval(ctx context.Context, js string, result interface{}) (err error) {
	response := make(chan response)
	p.requestCh <- request{ctx, js, result, response}
	res := <-response
	return res.err
}

// Close the pool
func (p *Pool) Close() {
	close(p.requestCh)
}
quickjs_test.go
package quickjs_test

import (
	"context"
	"fmt"
	"runtime"
	"testing"
	"time"

	"github.com/matthewmueller/duo/internal/quickjs"
	"github.com/tj/assert"
	"golang.org/x/sync/errgroup"
)

const slow = `
function slow(baseNumber){
	let result = 0;
  for (var i = Math.pow(10, 7); i >= 0; i--) {
		result += Math.atan(i) * Math.tan(i);
	};
	return Math.floor(result)
}
slow(10)
`

func TestWorker(t *testing.T) {
	ctx := context.Background()
	var eg errgroup.Group
	eg.Go(func() error {
		worker := quickjs.NewWorker()
		defer worker.Close()
		var result int64
		now := time.Now()
		err := worker.Eval(ctx, slow, &result)
		assert.NoError(t, err)
		fmt.Println(result)
		fmt.Println(time.Since(now))
		var result2 int64
		now2 := time.Now()
		err2 := worker.Eval(ctx, slow, &result2)
		assert.NoError(t, err2)
		fmt.Println(time.Since(now2))
		return nil
	})
	eg.Wait()
	// assert.Equal(t, "11", v)
}

func TestPool(t *testing.T) {
	ctx := context.Background()
	pool := quickjs.NewPool(runtime.NumCPU())
	defer pool.Close()
	var eg errgroup.Group
	for i := 0; i < 204; i++ {
		eg.Go(func() error {
			var result int64
			err := pool.Eval(ctx, slow, &result)
			assert.NoError(t, err)
			assert.Equal(t, int64(-2898551), result)
			return nil
		})
	}
	assert.NoError(t, eg.Wait())
}

Overall, it's working quite fast, but when I tested it with some CPU-intensive tasks, I noticed that with runtime.LockOSThread(), the goroutines may get stuck on the same thread. You lose a lot of concurrency if that's the case.

I was wondering:

  1. Is it possible to remove this requirement? Perhaps by cleaning up after each run?
  2. Do you know how to schedule goroutines on a specific thread or tell them to run on different threads?
  3. Any other workarounds you can think of to get uniform parallelism?

Thanks!


lithdew commented on May 30, 2024

Hm, the only way you'd be able to circumvent goroutines being stuck on the same thread is by manually setting the goroutine's thread affinity using cgo.

http://pythonwise.blogspot.com/2019/03/cpu-affinity-in-go.html

The alternative is to manually manage a thread pool over cgo.
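
For what it's worth, here is a minimal Linux-only sketch of the affinity idea, using golang.org/x/sys/unix's SchedSetaffinity rather than raw cgo (the CPU indices and the empty worker body are placeholders, and this is not necessarily the blog post's exact approach):

package main

import (
	"fmt"
	"runtime"
	"sync"

	"golang.org/x/sys/unix"
)

// pinToCPU locks the calling goroutine to its OS thread, then restricts that
// thread to a single CPU. Linux-only; pid 0 means "the calling thread".
func pinToCPU(cpu int) error {
	runtime.LockOSThread()
	var set unix.CPUSet
	set.Zero()
	set.Set(cpu)
	return unix.SchedSetaffinity(0, &set)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func(cpu int) {
			defer wg.Done()
			if err := pinToCPU(cpu); err != nil {
				fmt.Println("affinity:", err)
				return
			}
			// A pool worker would create its quickjs.Runtime here and then
			// read jobs from a channel, as in the Pool code above.
		}(i)
	}
	wg.Wait()
}

Locking the goroutine first and setting affinity second means a per-worker quickjs.Runtime created afterwards stays on one CPU for its whole lifetime.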

