
bigcache's Introduction

BigCache

Fast, concurrent, evicting in-memory cache written to keep a large number of entries without impacting performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, all operations take place on byte slices, so in most use cases entries need to be (de)serialized in front of the cache.

Requires Go 1.12 or newer.

Usage

Simple initialization

import (
	"context"
	"fmt"
	"time"

	"github.com/allegro/bigcache/v3"
)

cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
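
Because BigCache stores only byte slices, structured values need (de)serialization around Set and Get. A minimal sketch using encoding/json; the User type and the example function are illustrative, not part of the bigcache API:

import (
	"context"
	"encoding/json"
	"time"

	"github.com/allegro/bigcache/v3"
)

// User is an example value type; any JSON-serializable struct works.
type User struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

func example() error {
	cache, err := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
	if err != nil {
		return err
	}

	// Serialize before Set: the cache only accepts []byte.
	data, err := json.Marshal(User{Name: "Alice", Age: 30})
	if err != nil {
		return err
	}
	if err := cache.Set("user:alice", data); err != nil {
		return err
	}

	// Deserialize after Get.
	entry, err := cache.Get("user:alice")
	if err != nil {
		return err
	}
	var u User
	return json.Unmarshal(entry, &u)
}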

Custom initialization

When the cache load can be predicted in advance, custom initialization is preferable because it avoids additional memory allocation.

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/allegro/bigcache/v3"
)

config := bigcache.Config {
		// number of shards (must be a power of 2)
		Shards: 1024,

		// time after which entry can be evicted
		LifeWindow: 10 * time.Minute,

		// Interval between removing expired entries (clean up).
		// If set to <= 0 then no action is performed.
		// Setting it to less than 1 second is counterproductive; bigcache has a 1-second resolution.
		CleanWindow: 5 * time.Minute,

		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,

		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,

		// prints information about additional memory allocation
		Verbose: true,

		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,

		// callback fired when the oldest entry is removed because of its expiration time, because there is no space left
		// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
		// The default value is nil, which means no callback; this also prevents the oldest entry from being unwrapped.
		OnRemove: nil,

		// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time, because there is
		// no space left for the new entry, or because delete was called. A constant representing the reason will be passed through.
		// The default value is nil, which means no callback; this also prevents the oldest entry from being unwrapped.
		// Ignored if OnRemove is specified.
		OnRemoveWithReason: nil,
	}

cache, initErr := bigcache.New(context.Background(), config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}

LifeWindow & CleanWindow

  1. LifeWindow is a duration. After that time, an entry is considered dead but is not yet deleted.

  2. CleanWindow is an interval. Every CleanWindow, all dead entries are deleted, while entries that are still within their LifeWindow are kept (see the sketch below).
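
A minimal sketch of the two windows in action, reusing the imports from the Usage examples (timings shortened for illustration; the entry becomes dead after LifeWindow and is physically removed on the next clean-up):

config := bigcache.DefaultConfig(time.Second) // LifeWindow: entries are dead after 1 second
config.CleanWindow = 2 * time.Second          // dead entries are physically removed every 2 seconds

cache, _ := bigcache.New(context.Background(), config)
cache.Set("key", []byte("value"))

time.Sleep(3 * time.Second) // past both windows; the clean-up has run

if _, err := cache.Get("key"); err == bigcache.ErrEntryNotFound {
	fmt.Println("entry expired and was cleaned up")
}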

Benchmarks

Three caches were compared: bigcache, freecache and map. Benchmark tests were run on an i7-6700K CPU @ 4.00GHz with 32GB of RAM on Ubuntu 18.04 LTS (5.2.12-050212-generic).

The benchmark source code can be found here

Writes and reads

go version
go version go1.13 linux/amd64

go test -bench=. -benchmem -benchtime=4s ./... -timeout 30m
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/v3/caches_bench
BenchmarkMapSet-8                     	12999889	       376 ns/op	     199 B/op	       3 allocs/op
BenchmarkConcurrentMapSet-8           	 4355726	      1275 ns/op	     337 B/op	       8 allocs/op
BenchmarkFreeCacheSet-8               	11068976	       703 ns/op	     328 B/op	       2 allocs/op
BenchmarkBigCacheSet-8                	10183717	       478 ns/op	     304 B/op	       2 allocs/op
BenchmarkMapGet-8                     	16536015	       324 ns/op	      23 B/op	       1 allocs/op
BenchmarkConcurrentMapGet-8           	13165708	       401 ns/op	      24 B/op	       2 allocs/op
BenchmarkFreeCacheGet-8               	10137682	       690 ns/op	     136 B/op	       2 allocs/op
BenchmarkBigCacheGet-8                	11423854	       450 ns/op	     152 B/op	       4 allocs/op
BenchmarkBigCacheSetParallel-8        	34233472	       148 ns/op	     317 B/op	       3 allocs/op
BenchmarkFreeCacheSetParallel-8       	34222654	       268 ns/op	     350 B/op	       3 allocs/op
BenchmarkConcurrentMapSetParallel-8   	19635688	       240 ns/op	     200 B/op	       6 allocs/op
BenchmarkBigCacheGetParallel-8        	60547064	        86.1 ns/op	     152 B/op	       4 allocs/op
BenchmarkFreeCacheGetParallel-8       	50701280	       147 ns/op	     136 B/op	       3 allocs/op
BenchmarkConcurrentMapGetParallel-8   	27353288	       175 ns/op	      24 B/op	       2 allocs/op
PASS
ok  	github.com/allegro/bigcache/v3/caches_bench	256.257s

Writes and reads in bigcache are faster than in freecache. Writes to map are the slowest.

GC pause time

go version
go version go1.13 linux/amd64

go run caches_gc_overhead_comparison.go

Number of entries:  20000000
GC pause for bigcache:  1.506077ms
GC pause for freecache:  5.594416ms
GC pause for map:  9.347015ms
go version
go version go1.13 linux/arm64

go run caches_gc_overhead_comparison.go
Number of entries:  20000000
GC pause for bigcache:  22.382827ms
GC pause for freecache:  41.264651ms
GC pause for map:  72.236853ms

The test shows how long the GC pauses are for caches filled with 20 million entries. bigcache and freecache have very similar GC pause times.

Memory usage

You may observe what appears to be an exponential increase in the system's reported memory; this is expected behaviour. The Go runtime allocates memory in chunks, or 'spans', and informs the OS when they are no longer required by marking them as 'idle'. The spans remain part of the process's resource usage until the OS needs to repurpose the address space. Further reading is available here.
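
One hedged way to observe this from inside the process is runtime.ReadMemStats: idle spans still count against the process until released, while HeapReleased tracks what has already been returned to the OS.

import (
	"fmt"
	"runtime"
)

func printHeapSpans() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapIdle: spans with no objects in them; still part of the process's
	// resource usage until the OS repurposes them.
	// HeapReleased: idle spans already returned to the OS.
	fmt.Printf("HeapInuse=%d HeapIdle=%d HeapReleased=%d\n",
		m.HeapInuse, m.HeapIdle, m.HeapReleased)
}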

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map contains no pointers in its keys and values, the GC skips scanning its content. Therefore BigCache uses a map[uint64]uint32, where keys are hashes and values are offsets of entries.

Entries are kept in byte slices, again to avoid GC scanning. A byte slice can grow to gigabytes without impacting performance because the GC only sees a single pointer to it.
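
A simplified sketch of this design, assuming nothing about bigcache's internals beyond what is described above: a pointer-free index map plus a single byte slice acting as an arena. This is not bigcache's actual code, just the idea.

import "encoding/binary"

type toyShard struct {
	index   map[uint64]uint32 // hashed key -> offset into entries; no pointers, so the GC skips scanning it
	entries []byte            // one big allocation; the GC sees a single pointer
}

func newToyShard() *toyShard {
	return &toyShard{index: make(map[uint64]uint32)}
}

func (s *toyShard) set(hashedKey uint64, value []byte) {
	offset := uint32(len(s.entries))
	// Length-prefix each entry (4 bytes, little-endian) so get can recover it.
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(value)))
	s.entries = append(s.entries, hdr[:]...)
	s.entries = append(s.entries, value...)
	s.index[hashedKey] = offset
}

func (s *toyShard) get(hashedKey uint64) ([]byte, bool) {
	offset, ok := s.index[hashedKey]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(s.entries[offset : offset+4])
	return s.entries[offset+4 : offset+4+n], true
}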

Collisions

BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.
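
bigcache lets you supply your own hash via Config.Hasher, which makes the overwrite behaviour easy to demonstrate with a deliberately degenerate hasher (for illustration only; never use this in practice):

// constantHasher maps every key to the same hash, forcing collisions.
type constantHasher struct{}

func (constantHasher) Sum64(string) uint64 { return 42 }

func demoCollision() {
	config := bigcache.DefaultConfig(10 * time.Minute)
	config.Hasher = constantHasher{}

	cache, _ := bigcache.New(context.Background(), config)
	cache.Set("first", []byte("first-value"))
	cache.Set("second", []byte("second-value")) // same hash slot: overwrites "first"

	// "second" wins; "first" can no longer be retrieved.
	entry, err := cache.Get("second")
	fmt.Println(string(entry), err)
}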

Bigcache vs Freecache

Both caches provide the same core features, but they reduce GC overhead in different ways: bigcache relies on map[uint64]uint32, while freecache implements its own mapping built on slices to reduce the number of pointers.

Results from the benchmark tests are presented above. One advantage of bigcache over freecache is that you don't need to know the size of the cache in advance: when bigcache is full, it can allocate additional memory for new entries instead of overwriting existing ones, as freecache currently does. However, a hard maximum size can also be set in bigcache; see HardMaxCacheSize.

HTTP Server

This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the server package.
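
A hedged sketch of a client talking to that server; the address and the /api/v1/cache/ route below are assumptions, so check the server package for the exact flags and paths:

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func serverExample() error {
	base := "http://localhost:9090/api/v1/cache/" // assumed default address and route

	// PUT a value under a key.
	req, err := http.NewRequest(http.MethodPut, base+"my-unique-key", bytes.NewReader([]byte("value")))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()

	// GET it back.
	resp, err = http.Get(base + "my-unique-key")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
	return nil
}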

More

BigCache's genesis is described in the allegro.tech blog post: writing a very fast cache service in Go

License

BigCache is released under the Apache 2.0 license (see LICENSE)

bigcache's Issues

Item is not getting updated

var (
	ErrMediaNotFound = errors.New("Media file not found")
	ErrInvalidId = errors.New("Invalid ID")
	cache *bigcache.BigCache
	db *database.Instance
)

type MediaReference struct {
	URL 	string  `json:"url"`
	Views 	int		`json:"views"`
}

func DecodeHex(s string) (bson.ObjectId, error) {
	d, err := hex.DecodeString(s)
	if err != nil || len(d) != 12 {
		return "", ErrInvalidId
	}
	return bson.ObjectId(d), nil
}

func GetMediaURL(key string) (string, error) {
	// First check in the BigCache
	if entry, err := cache.Get(key); err == nil {
		ref := MediaReference{}
		err = ref.UnmarshalJSON(entry)
		if err != nil {
			return "", err
		}
		ref.Views++
		data, err := ref.MarshalJSON()
		if err != nil {
			return "", err
		}

		fmt.Println("count", string(data))

		err = cache.Set(key, data)
		if err != nil {
			return "", err
		}

		return GS_URL + ref.URL, err
	}
	// Then perform the search in the database
	obj, err := DecodeHex(key)
	if err != nil {
		return "", err
	}
	media, err := db.FindMediaById(obj)
	if err != nil {
		return "", ErrMediaNotFound
	}
	// Store the URL in the BigCache
	ref := MediaReference{URL: media.Address, Views: 1}
	data, err := ref.MarshalJSON()
	if err != nil {
		return "", err
	}
	cache.Set(key, data)

	return GS_URL + media.Address, nil
}

func onRemoveEntity(key string, entry []byte) {
	ref := MediaReference{}
	err := ref.UnmarshalJSON(entry)
	if err == nil {
		fmt.Println("delete", string(entry))
		go db.AddViewsToPostBasedOnMedia(bson.ObjectIdHex(key), ref.Views)
	}
}

func InitBigCache(development bool, database *database.Instance) error {
	config := bigcache.Config {
		// number of shards (must be a power of 2)
		Shards: 1024,
		// time after which entry can be evicted
		LifeWindow: 10 * time.Second,
		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,
		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,
		// prints information about additional memory allocation
		Verbose: development,
		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,
		// callback fired when the oldest entry is removed because of its
		// expiration time or no space left for the new entry. Default value is nil which
		// means no callback and it prevents from unwrapping the oldest entry.
		OnRemove: onRemoveEntity,
	}

	db = database

	var err error
	cache, err = bigcache.NewBigCache(config)
	if err != nil {
		return err
	}

	return nil
}

The code above performs view counting for cached items.
A view is counted when the record is accessed, and by the time the record is evicted, the number of views is saved to the database.
The problem is that when the OnRemove callback fires and the stored JSON is unwrapped, the view counter is wrong. It looks like it is either not updating the record or returning a different cached item.

The log trace

count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":2}
count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":3}
count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":4}
delete {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":1} <--- ???

Data race when running test with 1000 goroutines on Travis CI

I'm developing a key-value store abstraction and implementation/wrapper package for Go, and one of the implementations is for BigCache. I have a test that launches 1000 goroutines to concurrently interact with the underlying store. On my local machine it works fine every time, but on Travis CI I sometimes get this warning and then a subsequent error:

WARNING: DATA RACE
Write at 0x00c43a932018 by goroutine 163:
  runtime.slicecopy()
      /home/travis/.gimme/versions/go1.10.linux.amd64/src/runtime/slice.go:192 +0x0
  github.com/allegro/bigcache/queue.(*BytesQueue).push()
      /home/travis/gopath/src/github.com/allegro/bigcache/queue/bytes_queue.go:129 +0x2ca
  github.com/allegro/bigcache/queue.(*BytesQueue).Push()
      /home/travis/gopath/src/github.com/allegro/bigcache/queue/bytes_queue.go:81 +0xf0
  github.com/allegro/bigcache.(*cacheShard).set()
      /home/travis/gopath/src/github.com/allegro/bigcache/shard.go:75 +0x209
  github.com/allegro/bigcache.(*BigCache).Set()
      /home/travis/gopath/src/github.com/allegro/bigcache/bigcache.go:117 +0x153
  github.com/philippgille/gokv/bigcache.Store.Set()
      /home/travis/gopath/src/github.com/philippgille/gokv/bigcache/bigcache.go:42 +0x1e3
  github.com/philippgille/gokv/bigcache.(*Store).Set()
      <autogenerated>:1 +0xa0
  github.com/philippgille/gokv/test.InteractWithStore()
      /home/travis/gopath/src/github.com/philippgille/gokv/test/test.go:306 +0x1d0

For the full test output see: https://travis-ci.org/philippgille/gokv/builds/468489707#L1206

The test itself is: https://github.com/philippgille/gokv/blob/e48dea7fdf56ca55fecd32be28d8fd895682ae3a/bigcache/bigcache_test.go#L42
The implementation is: https://github.com/philippgille/gokv/blob/e48dea7fdf56ca55fecd32be28d8fd895682ae3a/bigcache/bigcache.go

Is this an error in the way I use BigCache? Or is this a bug in BigCache itself?

1.9 Concurrent Maps

Looking at tip for Go 1.9, there is a new feature called concurrent maps, and I wanted to start a discussion around adding support for concurrent cache ops.

I'm not sure about its performance, but the design seems fairly solid. It looks like it uses a lot of unsafe.Pointer types and atomic ops, so it might be fairly fast.

Redis

Is this a good substitute for an external Redis cache server?

Old entries (un-updated) values stored in the byte queue

Hello,

Unless I am misunderstanding, when you remove (or evict) an entry to free up space, past entries for the same key can still exist in the queue. That means the following:

SET:

  • ABC = 1
  • EDF = 1
  • ABC = 2

EVICT:

  • ABC
  • EDF

ABC still exists (because only a single occurrence of it is removed from the queue). On an update, should we not be overwriting a key's previous value in the byte allocation as well? Otherwise an evict has to run X times for X changes to the same key before the key is actually evicted.

New release

Could you release a new version with last features and bugfixes, please?

Short term goals for HTTP server

Now that the original maintainers have graciously allowed me to help with the vision and execution of this project, I wanted to talk about some of the things I am thinking of for it. As I helped with the HTTP server, a lot of my focus is geared around improving its implementation. It was a great 1.0 version, but it definitely has a long way to go before I would consider it production ready.

I wanted to start a discussion around its current shortcomings that I want to fix relatively quickly, and some short-term goals I think would be nice to have. I would like feedback on these items and would love to hear if you think there is a better direction we could take.

Quick fixes that I want to wrap into what I think should go into a 1.2 release:

  • SSL support. Security should always be the focus, and I want it to be part of the default config. I'm thinking of having a flag similar to -IWantNoSecurity or something, where the operator has to opt in. Let's Encrypt as an SSL provider has made free, trusted certificates so readily available that I would like to make sure we have high standards for security. The original maintenance and development team has had high standards since before I joined; I want to continue that legacy.
  • Authentication for the API for adding items to the cache. Since security is important, we don't want an attacker to be able to simply overload a public cache and crash it repeatedly. I currently don't think reads should be authenticated at this point, but I'd be more than open to it in a later version. Currently I'm thinking of a master API key that can be set when the server is initialised; the MVP for this would be a CLI flag.
  • CLI deliverable for release. It would be good to have separate binary downloads for the HTTP server when we release a new version. That way operators/engineers can get up and running quickly without needing to compile it themselves.
  • Deployment manifests? This one I'm on the fence about. Full disclosure: my employer is invested in the core project maintenance of Kubernetes and Cloud Foundry. I personally do not have a role in that maintenance, but I do like consuming both projects for personal use. I'm thinking of adding reference deployment manifests for both Cloud Foundry and Kubernetes, so if a user has the code base, they can just push the HTTP server with some defaults. While my focus with this idea is to promote the availability of this awesome project, it's definitely not a priority. I'm okay with pushing this off until someone asks for it or one of us decides it's more important than it is now.

Would love to get some feedback on these items. I haven't started the work yet but would like to soon.

Minor version bump to v1.1.0?

Now that the package is versioned, it could be good to have a minor version bump to 1.1.0, since there's a new package and some enhancements. Below are some proposed release notes.

v1.1.0 (2017-11-30)

Full Changelog

Implemented enhancements:

  • 1.9 Concurrent Maps #38

Closed issues:

  • Proposal: Server Package #56
  • Gathering stats #55
  • Update release notes for v1.0.0 #52
  • Custom logging #36
  • No way to increment value without updating the timestamp #35

Merged pull requests:

GC Pause time for maps.

I understand that benchmarks will depend on the machine used.
Following is from a 2017 Macbook pro (2.8 GHz Intel Core i7, 16 GB 2133 MHz LPDDR3)

However, native maps seem to have had a massive improvement. Is this due to Go 1.9?

go version go1.9.2 darwin/amd64

go run caches_gc_overhead_comparison.go
Number of entries:  20000000
GC pause for bigcache:  45.222µs
GC pause for freecache:  37.043µs
GC pause for map:  59.572µs

Clear Cache

Hey there, would it be possible to add a clear cache method?
Thanks

Eviction not performed after timeout

I am new to Go and looking for a cache library.
After reading your blog, I wrote a small program to study cache set/get/expire. Now I have an issue: expiry is not working after the timeout, and I can still get the entry from bigcache. Did I miss something?

package main

import (
	"fmt"
	"github.com/allegro/bigcache"
	"log"
	"time"
)

func main() {
	lifetime := 10 * time.Second

/*
	onRemove := func(key string, entry []byte) {
		log.Println(key)
	}
	config := bigcache.Config{
		// number of shards (must be a power of 2)
		Shards: 1024,
		// time after which entry can be evicted
		LifeWindow: lifetime,
		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,
		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,
		// prints information about additional memory allocation
		Verbose: true,
		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,
		// callback fired when the oldest entry is removed because of its
		// expiration time or no space left for the new entry. Default value is nil which
		// means no callback and it prevents from unwrapping the oldest entry.
		OnRemove: onRemove,
	}

	cache, initErr := bigcache.NewBigCache(config)
*/
	cache, initErr := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Second))
	if initErr != nil {
		log.Fatal(initErr)
	}

	cache.Set("my-unique-key", []byte("value"))

	if entry, err := cache.Get("my-unique-key"); err == nil {
		fmt.Println(string(entry))
	}

	timer := time.NewTimer(lifetime + time.Second)
	<-timer.C

	// Eviction is performed during writes to the cache since the lock is already acquired.
	cache.Set("key2", []byte("val2"))

	if entry2, err2 := cache.Get("my-unique-key"); err2 == nil {
		fmt.Println(string(entry2))
	} else {
		log.Println("not found")
	}
}

Goroutine leak in NewBigCache

An invocation of bigcache.NewBigCache starts a goroutine to handle cache cleanup before returning a reference to cache to the caller.

Once the returned reference cache goes out of scope in the caller's code, though, the garbage collector will be unable to collect it, as the cache cleanup goroutine will continue to run and therefore will be leaked.

To fix this problem, the BigCache type should probably have a Close function to stop its helper goroutine.

/cc @mxplusb
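
A minimal sketch of the pattern such a Close method would use (illustrative only, not bigcache's actual code): the constructor owns a stop channel, and Close ends the cleanup goroutine so the GC can collect the cache.

type cache struct {
	close chan struct{}
}

func newCache(cleanInterval time.Duration) *cache {
	c := &cache{close: make(chan struct{})}
	go func() {
		ticker := time.NewTicker(cleanInterval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				// evict expired entries here
			case <-c.close:
				return // goroutine exits; the cache can now be GC'd
			}
		}
	}()
	return c
}

// Close stops the cleanup goroutine.
func (c *cache) Close() error {
	close(c.close)
	return nil
}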

Memory usage grows indefinitely

Hello,

I've been playing around with bigcache and I've noticed that calling Set() with the same keys causes the memory usage to grow indefinitely. Here's an example:

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
data := []byte("TESTDATATESTDATATESTDATATESTDATATESTDATATESTDATATESTDATA")

for {
	for i := 0; i < 10000; i++ {
		cache.Set(strconv.Itoa(i), data)
	}
	time.Sleep(time.Second)
}

Running that causes the memory usage of the application to grow indefinitely until I run out of memory. Is this expected behaviour? I'm just using bigcache as if it were a concurrent map, and I would have expected the elements to be replaced, so memory usage shouldn't grow beyond what is necessary for 10,000 elements.

Gathering stats

I have a proposal to collect hit and miss (anything else?) metrics.
This would allow users to see the cache's performance and tune the TTL and the size of the data.

If this is accepted, I would like to be assigned to it 😉

Possible datarace

I was looking into shard.go and saw the pattern below, which looks racy to me. I haven't gone through all the steps to fuzz or reproduce it, but I thought I'd post it here to get your thoughts.
Possibly problematic code:

func (s *cacheShard) del(key string, hashedKey uint64) error {
	s.lock.RLock()
	itemIndex := s.hashmap[hashedKey]

	if itemIndex == 0 {
		s.lock.RUnlock()
		s.delmiss()
		return ErrEntryNotFound
	}

	wrappedEntry, err := s.entries.Get(int(itemIndex))
	if err != nil {
		s.lock.RUnlock()
		s.delmiss()
		return err
	}
	s.lock.RUnlock()

	s.lock.Lock()
	{
		delete(s.hashmap, hashedKey)
		s.onRemove(wrappedEntry, Deleted)
		resetKeyFromEntry(wrappedEntry)
	}
	s.lock.Unlock()

	s.delhit()
	return nil
}

Multiple readers may enter the first section in parallel, each getting an identical wrappedEntry for an entry to delete. They will then enter the write-locked section sequentially, and resetKeyFromEntry will write destructively on the data. The sequential deletion of the same element may be fine, but even so, another write may reuse the data, and the next of these sequential deletes would then overwrite it.

The proper pattern is to either wrap everything within the write lock, or redo the first check once the write lock is obtained.

func (s *cacheShard) del(key string, hashedKey uint64) error {
	s.lock.RLock()
	itemIndex := s.hashmap[hashedKey]

	if itemIndex == 0 {
		s.lock.RUnlock()
		s.delmiss()
		return ErrEntryNotFound
	}

	wrappedEntry, err := s.entries.Get(int(itemIndex))
	if err != nil {
		s.lock.RUnlock()
		s.delmiss()
		return err
	}
	s.lock.RUnlock()

	s.lock.Lock()
	{

		itemIndex = s.hashmap[hashedKey]

		if itemIndex == 0 {
			s.lock.Unlock()
			s.delmiss()
			return ErrEntryNotFound
		}

		wrappedEntry, err = s.entries.Get(int(itemIndex))
		if err != nil {
			s.lock.Unlock()
			s.delmiss()
			return err
		}

		delete(s.hashmap, hashedKey)
		s.onRemove(wrappedEntry, Deleted)
		resetKeyFromEntry(wrappedEntry)
	}
	s.lock.Unlock()
	s.delhit()
	return nil
}

I'm not fully versed in the codebase, so apologies if I've misunderstood something.

Expose ByteQueue capacity from Cache methods

The cache already exposes the number of entries through Len() and some additional useful statistics through Stats(). I would like to be able to get the underlying size in bytes of the cache, either from a method or as a new field of the Stats type.

The ByteQueue package already has a Capacity() method that returns exactly the information I need. Could we expose the size of the cache using one of the two approaches above?

Thanks

ARC cache ?

Is it possible to create an ARC cache using this library?

Lots of extra memory consumed when creating and deleting bigcache instances in an interleaved fashion

What version of bigcache are you using?

commit: 62db144

What version of Go are you using (go version)?

go version go1.9.2 linux/amd64

What operating system (Linux, Windows, …) and version?

Linux 4.4.0-87-generic 14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

What did you do?

package main

import (
	"testing"
	"time"

	log "github.com/sirupsen/logrus"

	"github.com/allegro/bigcache"
)

func newBigCache() *bigcache.BigCache {
	config := bigcache.Config{
		Shards:             128,
		LifeWindow:         200 * time.Minute,
		MaxEntriesInWindow: 10 * 1000,
		MaxEntrySize:       1024 * 2,
		Verbose:            true,
	}

	c, _ := bigcache.NewBigCache(config)
	c.Set("a", []byte{'b'})
	return c
}

func testBigCacheMemCost(t *testing.T, num int) []*bigcache.BigCache {

	cacheArr := make([]*bigcache.BigCache, num)

	for i := 0; i < num; i++ {
		cacheArr[i] = newBigCache()
		//log.Info("newBigCache: ", i)
	}
	return cacheArr
}

func TestBigCacheMemCost1(t *testing.T) {
	cacheNum := 500
	cache1 := testBigCacheMemCost(t, cacheNum)
	log.Infof("-> clean cache 1 <-")
	for _, c := range cache1 {
		c.Reset()
	}
	cache1 = nil

	cache2 := testBigCacheMemCost(t, cacheNum)
	log.Infof("-> clean cache 2 <-")
	for _, c := range cache2 {
		c.Reset()
	}
	cache2 = nil
	time.Sleep(15 * time.Second)
}

func TestBigCacheMemCost2(t *testing.T) {
	cacheNum := 500
	cache1 := testBigCacheMemCost(t, cacheNum)
	cache2 := testBigCacheMemCost(t, cacheNum)

	log.Info("-> clean cache <-")
	for _, c := range cache1 {
		c.Reset()
	}
	cache1 = nil
	for _, c := range cache2 {
		c.Reset()
	}
	cache2 = nil
	time.Sleep(15 * time.Second)
}

func TestBigCacheMemCost3(t *testing.T) {
	cache := testBigCacheMemCost(t, 1000)

	log.Info("-> clean cache <-")
	for _, c := range cache {
		c.Reset()
	}
	cache = nil
	time.Sleep(15 * time.Second)
}

What did you expect to see?

The memory consumed by TestBigCacheMemCost1, TestBigCacheMemCost2, TestBigCacheMemCost3 should be mostly equal.

What did you see instead?

TestBigCacheMemCost1 takes much more memory (RSS) than TestBigCacheMemCost2 and TestBigCacheMemCost3. If we set cacheNum to 1000, the symptom is more apparent.

So what's the problem with this program? Any advice and suggestions would be greatly appreciated!
Thanks!

Update release notes for v1.0.0

The release notes for v1.0.0 won't help new users. Below is a template that can help.

* This Change Log was automatically generated by github_changelog_generator


Implemented enhancements:

  • Feature Request: support to clean expired entries #41 (flisky)
  • Improve fnv64a hashing algorithm by remove all allocations #19 (mateuszdyminski)

Fixed bugs:

  • Hard max cache size exceeded #18
  • Entries indexes stay unchanged after additional memory allocation #8 (druminski)

Closed issues:

  • Item is not getting updated #39
  • Eviction not perform after timeout #31
  • Old entries (un-updated) values stored in the byte queue #30
  • Push emptyBlobLen inside the allocateAdditionalMemory() function #29
  • ARC cache ? #28
  • Redis #27
  • Clear Cache #23
  • What is the purpose of "entries queue.BytesQueue" #21
  • Event handlers for evict? #14
  • Add per entry expiration #13
  • Add memory allocation hard limit #4

Merged pull requests:

Why does cacheShard.set call onEvict?

I have two problems with this:

  1. It is redundant; the oldest entry will be removed anyway if space is needed for the new entry.
  2. It can cause set to invoke onRemoveWithReason with both "Expired" and "NoSpace". This is troublesome because cleanUp also invokes onRemoveWithReason with "Expired", making it impossible to differentiate between set invoking onRemoveWithReason and cleanUp invoking onRemoveWithReason.

Proposed solution: Only remove the oldest entry in set with reason "NoSpace"

The point of all this is that each function/goroutine which might remove something from the cache should have its own distinct reason for doing so, thus preventing overlap and allowing a developer to know precisely why an entry was removed.

Feedback from Community Talk

I totally forgot to capture this feedback live, which I regret, so I will try to capture it well here. I'm tagging the people I can remember being there, so if you see that someone from the meetup is missing and you want to make sure their feedback is captured, please add them!

Here is the talk. The audio is sketchy for some reason, but next time I give this talk I will make sure it's better.

  • Everyone loved the performance focus!
  • What are the use cases?
  • Does the cache persist past program execution? i.e., can you restart the cache and have it rebuild itself
  • Is there an option to distribute the cache? i.e., cache clustering
  • How do we handle eviction?
  • Is the cache thread safe?
  • What is the performance like with billions of objects/hundreds of GBs?

Would love to really involve the community on this one!

//cc @enocom @jasonkeene @kcboyle

Per Item TTL

Any thoughts on having a per-item TTL similar to memcache/freecache/Redis/etc.? Unless I missed something, it appears that's not possible right now.

I think it'd be fairly easy to implement (and I can send PR if I do). Just wanted to start the discussion.

Bigcache short description

We are working on an article about the current state of concurrent cache implementations in Go. We are planning to include bigcache with the following short description. Please let me know if my understanding is correct, or if you would like to modify it.

BigCache divides the data into shards based on the hash of the key. Each shard
contains a map and a ring buffer. Whenever a new element is set, it appends the
element in the ring buffer of the corresponding shard and the offset into the
buffer is stored in the map. If the same element is set more than once, the
previous entries in the buffer are marked invalid. If the buffer is too small,
it is expanded until the maximum capacity is reached.

The map stores data from uint32 to uint32. Each key is hashed into a uint32
that becomes the key of the map. The value of the map points to the offset
into the buffer where value is stored along with metadata information. If
there are hash collisions, BigCache ignores the previous key and stores the
current one into the map.
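For what it's worth, the shard selection described above reduces to a bit mask on the hash, which is why bigcache's shard count must be a power of two. A simplified sketch, not bigcache's exact code:

// With shardCount a power of two, hash % shardCount reduces to a cheap AND.
const shardCount = 1024

var shardMask uint64 = shardCount - 1

func shardFor(hashedKey uint64) uint64 {
	return hashedKey & shardMask
}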

items never expire?

I just realised that my items don't expire within 1 minute. How do I expire items?

    LifeWindow: 1 * time.Minute,

Dependent cache items

This is one feature I have found missing in all types of caches. Cached items can't be atomically made dependent on one another. The simple case would be paging:

GET /somepage/paging-1 --> cache.set("/something/paging-1", content)
GET /somepage/paging-2 --> cache.set("/something/paging-2", content)
GET /somepage/paging-3 --> cache.set("/something/paging-3", content)
POST /somepage --> cache.invalidate("/something/paging-*)

The iterator can be used, but if a lot of entries are cached it really isn't efficient. Some additional logic would be fine: specify that "/something/paging-1" depends on "/something/", and invalidate/call a hook if "/something/" has changed. It is not an issue to implement such a map myself, but since it has to be atomic to avoid race conditions (one client invalidates the cache while a second requests it), it would be really nice if it were implemented at the bigcache level, automatically creating a hierarchy where changing one item invalidates all dependent items.
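
Until something like this exists in bigcache itself, here is a hedged userland sketch: a wrapper that tracks parent-to-child keys under its own lock and removes dependents via bigcache's Delete. The DependentCache type and its methods are hypothetical, not part of the bigcache API.

// DependentCache is a hypothetical wrapper adding parent -> children invalidation.
type DependentCache struct {
	mu       sync.Mutex
	cache    *bigcache.BigCache
	children map[string][]string // parent key -> dependent keys
}

func NewDependentCache(c *bigcache.BigCache) *DependentCache {
	return &DependentCache{cache: c, children: make(map[string][]string)}
}

// SetWithParent stores a value and registers it as dependent on parent.
func (d *DependentCache) SetWithParent(parent, key string, value []byte) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	if err := d.cache.Set(key, value); err != nil {
		return err
	}
	d.children[parent] = append(d.children[parent], key)
	return nil
}

// Invalidate removes a parent and all keys registered under it, atomically
// with respect to other wrapper calls.
func (d *DependentCache) Invalidate(parent string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	for _, key := range d.children[parent] {
		d.cache.Delete(key)
	}
	delete(d.children, parent)
	d.cache.Delete(parent)
}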

Hard max cache size exceeded

Howdy,

Last week I was able to deploy my new version of ClueGetter to production. This morning Icinga alerted that pretty much everywhere I had deployed last week, we were running out of RAM. A profile of the heap shows: http://storage8.static.itmages.com/i/16/0523/h_1463999604_2310069_9f039a5d50.png

How it's initialized can be seen here: https://github.com/Freeaqingme/ClueGetter/blob/develop/src/cluegetter/messagePersist.go#L431 . Most notably the: 'HardMaxCacheSize: 1024'.

I realize I'm actually instantiating two caches, even though msgIdIdx should only contain a few MB at most. However, 2x1024 MB would be ~2 GiB, not the 2977.6+444.59 MB that the heap profile shows?

Branching strategy

I've noticed recently we've had to do a few merges and I've fixed some conflicts on my end.

Currently we're using the topic branch strategy, and I'd like to move to the long-running branch strategy. See: https://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows

Using the long-running branch strategy is pretty common, and it would allow us to make and merge changes as needed for development, then we'll merge into master for production releases. The goal is to protect master a bit and not have to worry about merges or other conflicts in master. We can resolve all of that in develop (or whatever we want to call it) and then each merge into master is a new release.

Thoughts?

Custom logging

It would be nice if you could pass a custom logger when setting bigcache.Config.Verbose.

No way to increment value without updating the timestamp

These commonly used methods are missing:

  1. Increment/decrement an integer-based value without forcing a set, which puts a new timestamp. This is a problem because, when I need to collect counts for popular keys, they may never expire (say expiration/eviction triggers a callback function which writes the count value every "5 minutes" to a log file).

  2. Delete key - there is no way to remove a key or force the eviction function on particular keys; overwriting assumes one has a new value.

Nice to have: when calling Reset, it would be nice to trigger the OnRemove callback, or to provide another function which does that.

Unable to get/set data in onremove callback

Is it possible to GET/SET values inside onremove callback?

import (
  "github.com/allegro/bigcache"
)

var cache *bigcache.BigCache
onRemove := func(key string, entry []byte) {
  // THIS DOESNT WORK
  f, err := cache.Get("test")
}


config := bigcache.Config{
  // number of shards (must be a power of 2)
  Shards: 5,
  // time after which entry can be evicted
  LifeWindow: 5 * time.Second,
  // Interval between removing expired entries (clean up).
  // If set to <= 0 then no action is performed. Setting to < 1 second is counterproductive — bigcache has a one second resolution.
  CleanWindow: 6 * time.Second,
  // max entry size in bytes, used only in initial memory allocation
  MaxEntrySize: 5000,
  // prints information about additional memory allocation
  Verbose: false,
  // callback fired when the oldest entry is removed because of its expiration time or no space left
  // for the new entry, or because delete was called. A bitmask representing the reason will be returned.
  // Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
  OnRemove: onRemove,
}
cache, err = bigcache.NewBigCache(config)

Proposal: Server Package

It would be awesome to see a basic/reference HTTP implementation package here so this can be easily deployed as a basic HTTP cache.

Pros:

  • So fast.
  • Easily deployable to Cloud Foundry, Heroku, etc.

Here is what I am thinking. An API path like this:

GET /api/v1/{key}
PUT /api/v1/{key}

For configuration:

{
	"LifeWindow": "10m",
	"MaxEntrySize": "500KB",
	"Verbose": true,
	"HardMaxCacheSize": "8192KB"
}

Configuration can be read from either CLI flags, environment variables, or .bigcacherc. Some of the cache configurations can be inferred/defaulted.

Collaborator Permissions?

I would like to request access to the project as a collaborator. I've contributed the server implementation (making me the second largest contributor), increased performance by 20% for parallel sets by fixing the locking strategy, and maintained contribution activity through comments in various issues and PRs.

Having collaborator access would allow me to work in branches in the main project as well as merge requests from the community so long as they meet the project standards. My goal is to get earlier feedback on useful features from various branches I think would be good (mostly for the web server) and maintain a higher level of commitment to the project.

I promise I won't git push -f master. 😄

benchmarks give an error

caches_bench git:(master) go test -bench=. -benchtime=10s ./... -timeout 30m 
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/caches_bench
BenchmarkMapSet-8                     	50000000	       477 ns/op
BenchmarkConcurrentMapSet-8           	10000000	      1066 ns/op
BenchmarkFreeCacheSet-8               	30000000	       746 ns/op
BenchmarkBigCacheSet-8                	30000000	       449 ns/op
BenchmarkMapGet-8                     	50000000	       275 ns/op
BenchmarkConcurrentMapGet-8           	50000000	       344 ns/op
BenchmarkFreeCacheGet-8               	30000000	       642 ns/op
BenchmarkBigCacheGet-8                	30000000	       420 ns/op
BenchmarkBigCacheSetParallel-8        	fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x549559, 0x16)
	/usr/lib/go/src/runtime/panic.go:608 +0x72
runtime.sysMap(0xc704000000, 0x4000000, 0x65a5f8)
	/usr/lib/go/src/runtime/mem_linux.go:156 +0xc7
runtime.(*mheap).sysAlloc(0x640820, 0x4000000, 0x640838, 0x7fd4969a8ac0)
	/usr/lib/go/src/runtime/malloc.go:619 +0x1c7
runtime.(*mheap).grow(0x640820, 0x1, 0x0)
	/usr/lib/go/src/runtime/mheap.go:920 +0x42
runtime.(*mheap).allocSpanLocked(0x640820, 0x1, 0x65a608, 0x400)
	/usr/lib/go/src/runtime/mheap.go:848 +0x337
runtime.(*mheap).alloc_m(0x640820, 0x1, 0x7, 0x7fd46baadfff)
	/usr/lib/go/src/runtime/mheap.go:692 +0x119
runtime.(*mheap).alloc.func1()
	/usr/lib/go/src/runtime/mheap.go:759 +0x4c
runtime.(*mheap).alloc(0x640820, 0x1, 0x7fd46b010007, 0x7fd4969a8b58)
	/usr/lib/go/src/runtime/mheap.go:758 +0x8a
runtime.(*mcentral).grow(0x641d18, 0x0)
	/usr/lib/go/src/runtime/mcentral.go:232 +0x94
runtime.(*mcentral).cacheSpan(0x641d18, 0x1969a8b58)
	/usr/lib/go/src/runtime/mcentral.go:106 +0x2f8
runtime.(*mcache).refill(0x7fd4c2b10d80, 0x45a707)
	/usr/lib/go/src/runtime/mcache.go:122 +0x95
runtime.(*mcache).nextFree.func1()
	/usr/lib/go/src/runtime/malloc.go:749 +0x32
runtime.systemstack(0x0)
	/usr/lib/go/src/runtime/asm_amd64.s:351 +0x66
runtime.mstart()
	/usr/lib/go/src/runtime/proc.go:1229

goroutine 147 [running]:
runtime.systemstack_switch()
	/usr/lib/go/src/runtime/asm_amd64.s:311 fp=0xc011a74cd8 sp=0xc011a74cd0 pc=0x456d60
runtime.(*mcache).nextFree(0x7fd4c2b10d80, 0x7, 0x2, 0x1, 0xc)
	/usr/lib/go/src/runtime/malloc.go:748 +0xb6 fp=0xc011a74d30 sp=0xc011a74cd8 pc=0x40b036
runtime.mallocgc(0x20, 0x0, 0x0, 0x0)
	/usr/lib/go/src/runtime/malloc.go:903 +0x793 fp=0xc011a74dd0 sp=0xc011a74d30 pc=0x40b983
runtime.slicebytetostring(0x0, 0xc000012160, 0x11, 0x20, 0x0, 0x0)
	/usr/lib/go/src/runtime/string.go:102 +0x9f fp=0xc011a74e00 sp=0xc011a74dd0 pc=0x443fcf
fmt.Sprintf(0x5475ee, 0xd, 0xc011a74e90, 0x2, 0x2, 0x5dd2972212259774, 0xc011a74f14)
	/usr/lib/go/src/fmt/print.go:204 +0x92 fp=0xc011a74e58 sp=0xc011a74e00 pc=0x49ab52
github.com/allegro/bigcache/caches_bench.parallelKey(0x22c, 0x98b595, 0x11, 0xc011a74f14)
	/home/anders/go/src/github.com/allegro/bigcache/caches_bench/caches_bench_test.go:206 +0xbf fp=0xc011a74ec0 sp=0xc011a74e58 pc=0x4f9d8f
github.com/allegro/bigcache/caches_bench.BenchmarkBigCacheSetParallel.func1(0xc0a76d6000)
	/home/anders/go/src/github.com/allegro/bigcache/caches_bench/caches_bench_test.go:111 +0x74 fp=0xc011a74f90 sp=0xc011a74ec0 pc=0x4f9ee4
testing.(*B).RunParallel.func1(0xc0000145c0, 0xc0000145b8, 0xc0000145b0, 0xc000094000, 0xc0000541d0)
	/usr/lib/go/src/testing/benchmark.go:626 +0x97 fp=0xc011a74fb8 sp=0xc011a74f90 pc=0x4b7977
runtime.goexit()
	/usr/lib/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc011a74fc0 sp=0xc011a74fb8 pc=0x458cc1
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 1 [chan receive, 1 minutes]:
testing.(*B).doBench(0xc000094000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/testing/benchmark.go:260 +0x77
testing.(*benchContext).processBench(0xc00000a0c0, 0xc000094000)
	/usr/lib/go/src/testing/benchmark.go:447 +0x2ec
testing.(*B).run(0xc000094000)
	/usr/lib/go/src/testing/benchmark.go:251 +0x74
testing.(*B).Run(0xc000094340, 0x54a7a3, 0x1c, 0x54f0e8, 0x4adf00)
	/usr/lib/go/src/testing/benchmark.go:515 +0x2e1
testing.runBenchmarks.func1(0xc000094340)
	/usr/lib/go/src/testing/benchmark.go:416 +0x78
testing.(*B).runN(0xc000094340, 0x1)
	/usr/lib/go/src/testing/benchmark.go:141 +0xb2
testing.runBenchmarks(0x54d07d, 0x28, 0xc00000a0a0, 0x6388c0, 0xe, 0xe, 0x80)
	/usr/lib/go/src/testing/benchmark.go:422 +0x323
testing.(*M).Run(0xc0000ac000, 0x0)
	/usr/lib/go/src/testing/testing.go:1040 +0x37c
main.main()
	_testmain.go:68 +0x13d

goroutine 15 [semacquire]:
sync.runtime_Semacquire(0xc0000145c8)
	/usr/lib/go/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc0000145c0)
	/usr/lib/go/src/sync/waitgroup.go:130 +0x64
testing.(*B).RunParallel(0xc000094000, 0xc0000541d0)
	/usr/lib/go/src/testing/benchmark.go:629 +0x1b2
github.com/allegro/bigcache/caches_bench.BenchmarkBigCacheSetParallel(0xc000094000)
	/home/anders/go/src/github.com/allegro/bigcache/caches_bench/caches_bench_test.go:107 +0xc3
testing.(*B).runN(0xc000094000, 0x5f5e100)
	/usr/lib/go/src/testing/benchmark.go:141 +0xb2
testing.(*B).launch(0xc000094000)
	/usr/lib/go/src/testing/benchmark.go:290 +0xbf
created by testing.(*B).doBench
	/usr/lib/go/src/testing/benchmark.go:259 +0x59

goroutine 146 [runnable]:
fmt.(*pp).free(0xc0a76d4000)
	/usr/lib/go/src/fmt/print.go:141 +0xb4
fmt.Sprintf(0x5475ee, 0xd, 0xc011a5ce90, 0x2, 0x2, 0x4fb212810910e3e3, 0xc011a5cf14)
	/usr/lib/go/src/fmt/print.go:205 +0xb4
github.com/allegro/bigcache/caches_bench.parallelKey(0x7, 0x94e699, 0x10, 0xc011a5cf14)
	/home/anders/go/src/github.com/allegro/bigcache/caches_bench/caches_bench_test.go:206 +0xbf
github.com/allegro/bigcache/caches_bench.BenchmarkBigCacheSetParallel.func1(0xc0a76d2000)
	/home/anders/go/src/github.com/allegro/bigcache/caches_bench/caches_bench_test.go:111 +0x74
testing.(*B).RunParallel.func1(0xc0000145c0, 0xc0000145b8, 0xc0000145b0, 0xc000094000, 0xc0000541d0)
	/usr/lib/go/src/testing/benchmark.go:626 +0x97
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 145 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 148 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 149 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 150 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 151 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192

goroutine 152 [running]:
	goroutine running on other thread; stack unavailable
created by testing.(*B).RunParallel
	/usr/lib/go/src/testing/benchmark.go:619 +0x192
exit status 2
FAIL	github.com/allegro/bigcache/caches_bench	375.990s

go env:

GOARCH="amd64"
GOBIN=""
GOCACHE="/home/anders/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/anders/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build910620148=/tmp/go-build -gno-record-gcc-switches"

Alias key

Is it possible to have two keys point to the same cache entry?

Should MaxEntrySize include the size of the key?

Hello,

When setting MaxEntrySize, should I include the size of the key or just the "value" size? My use case only requires checking for the existence of a key, so my values are always set to nil.

LifeWindow: 0 <- is this valid?

  1. I would like entries not to expire via LifeWindow, only through LRU removal. Is that possible with LifeWindow: 0?

  2. is bigcache production ready?

config := bigcache.Config {
// number of shards (must be a power of 2)
Shards: 1024,
// time after which entry can be evicted
LifeWindow: 0,
// rps * lifeWindow, used only in initia

Push emptyBlobLen inside the allocateAdditionalMemory() function

Hi There,

I am trying to learn how bigcache works. May I ask why "emptyBlobLen" is pushed inside the allocateAdditionalMemory() function? What is the purpose of doing that? I have also noticed the constant minimumEmptyBlobSize and its comment: "Minimum empty blob size in bytes. Empty blob fills space between tail and head in additional memory allocation." However, I still cannot figure out the reason for doing that.

Thanks,

Iterator: index out of range

Has anyone else seen this issue when using the iterator? It only happens occasionally, and I haven't found a way to reproduce it:

panic: runtime error: index out of range

goroutine 186467 [running]:
encoding/binary.littleEndian.Uint64(...)
        /usr/local/go/src/encoding/binary/binary.go:76
myapp/vendor/github.com/allegro/bigcache.readTimestampFromEntry(...)
        /go/src/myapp/vendor/github.com/allegro/bigcache/encoding.go:45
myapp/vendor/github.com/allegro/bigcache.(*EntryInfoIterator).Value(0xc024444050, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/src/myapp/vendor/github.com/allegro/bigcache/iterator.go:117 +0x555
myapp/cache.GetAllEntries.func1(0xc004b3d020)
        /go/src/myapp/cache/cache.go:41 +0x9f
created by myapp/cache.GetAllEntries
        /go/src/myapp/cache/cache.go:38 +0x58

My code looks like this:

func GetAllEntries() <-chan []byte {
	entries := make(chan []byte)
	go func() {
		iter := cache.Iterator()
		for iter.SetNext() {
			if entry, err := iter.Value(); err == nil {
				entries <- entry.Value()
			}
		}
		close(entries)
	}()
	return entries
}

Add memory allocation hard limit

I would like to have a hard limit for max cache size, so for example my application won't be killed by OOMKiller, losing all cache data.

Using bigcache with google appengine

I'm trying to use bigcache with Google App Engine (go1.9); however, I get the error below:

go-app-builder: Failed parsing input: parser: bad import "unsafe" in github.com/allegro/bigcache/bigcache.go from GOPATH

After trying some options, I was able to solve the problem by not using the "unsafe" package in encoding.go's bytesToString.

The original function was

func bytesToString(b []byte) string {
	bytesHeader := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	strHeader := reflect.StringHeader{Data: bytesHeader.Data, Len: bytesHeader.Len}
	return *(*string)(unsafe.Pointer(&strHeader))
}

I have changed it to:

func bytesToString(b []byte) string {
	return string(b)
}

My question is:

Is it OK to make this change? Do you think I will face any issues?

Thanks
