redisc's Issues

Proposal: add `Cluster` method to split keys by nodes

Description

Add a Cluster.SplitByNode(keys ...string) [][]string method that takes a list of Redis keys and returns them grouped by the node they belong to. It is similar to SplitBySlot, but while SplitBySlot is guaranteed to return groupings of keys that definitely belong together (as they live in the same hash slot), SplitByNode takes a more optimistic approach and returns groupings of keys that are likely to live on the same node, even if they don't necessarily share the same hash slot.

It does so by using the Cluster's current known slot-to-node mapping (which is why the new function must be a method on the Cluster type: it needs access to that mapping). If the cluster has remained stable since the last time the mapping was updated, and remains stable until the commands for those keys are executed, then it can significantly improve performance, as more keys can be grouped together using the same connection. If the cluster has not remained stable, it will result in MOVED errors that can be handled via the usual mechanisms of the redisc package, either manually or automatically via a RetryConn.

It is a new API and as such could be released as part of a minor version.

Use Case

While in general care should be taken to design keys so that related data lives in the same key slot (e.g. {user:123}:foo and {user:123}:bar, so that both the foo and bar data for user 123 live in the same slot), it is sometimes useful to run a number of commands that cannot fit this restriction (e.g. an admin user may want to retrieve all users' foos). In this case, the execution of commands can be optimized with SplitByNode.
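To make the intended usage concrete, here is a minimal sketch assuming the proposed signature (SplitByNode does not exist yet; SplitBySlot, BindConn and Cluster.Get are the existing API):

package example

import (
	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// getAll fetches every key, grouping keys that likely live on the same node
// so that each group can reuse a single connection. SplitByNode is the
// proposed (hypothetical) method described above.
func getAll(cluster *redisc.Cluster, keys ...string) (map[string]string, error) {
	out := make(map[string]string, len(keys))
	for _, group := range cluster.SplitByNode(keys...) {
		conn := cluster.Get()
		// Bind the connection to the node serving the first key of the group.
		if err := redisc.BindConn(conn, group[0]); err != nil {
			conn.Close()
			return nil, err
		}
		for _, k := range group {
			v, err := redis.String(conn.Do("GET", k))
			if err != nil && err != redis.ErrNil {
				conn.Close()
				return nil, err
			}
			// missing keys end up as empty strings in this sketch
			out[k] = v
		}
		conn.Close()
	}
	return out, nil
}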

Implementation Concerns

What if at the moment this is called, not all slots are mapped to a node? It could return an error, but that would make it troublesome to use that API - the calling code would always have to prepare for the error case and have a fallback scenario (which would likely be to use SplitBySlot). Another option would be to return a random node for the slots that are not covered by a node, but that's suboptimal as it is almost sure that those keys will result in a MOVED error.

A fair option, I think, would be for SplitByNode to return the same groupings as SplitBySlot for the keys that are not covered by any node. So it always returns valid groupings of keys, the calling code is the same whether the internal mappings are complete or partial, and the groupings are as optimal as they can be - the keys with a valid mapping are properly grouped, and those without a mapping are grouped by their slot so that when executed, they will return at most a single MOVED error (for the first command) and after following that redirection all keys in that grouping will be on the right node.

If the Cluster is closed, instead of returning an error, simply call SplitBySlot instead. The "closed cluster" error will be returned when attempting to run any command on that cluster.


(note that this proposal is not a promise of implementation, nor is it a promise of merging a PR that would implement this feature - it is first and foremost for me to give it some thought and as a reminder when I have some time to work on the package, and published for visibility)

Proposal: relax the constraints on pipeline for the `RetryConn`

Description

When the package was initially created, as a precaution due to potentially tricky and problematic semantics, the automatic following of redirections (available via the connection returned by the RetryConn API function) disallows sending pipelined commands (i.e. Send, Flush and Receive always return an error).

This proposal is to reconsider this restriction and see if it would make sense for a retry connection to support the full redis.Conn API.

This is technically an internal implementation detail (no changes in the public API), but it enables new features that would be valid in a minor release. That being said, it would likely add significant complexity to the RetryConn implementation, and as such would be best released as a minor version distinct from the other proposals, as it is more risky (so that package users can decide if they want those changes or not).

Use Case

The main use case would be to execute a pipeline of commands based on groupings of keys split either by the existing SplitBySlot API function, or by the proposed SplitByNode function. When executing commands whose keys are known to (or very likely to) live on the same node, it makes sense to send them as a pipeline for more efficient execution. If a MOVED error is encountered, either all commands are guaranteed to work on the new target (if SplitBySlot was used), or are very likely to work on the new target (if SplitByNode was used and one of the slots has moved, it's likely that the primary has failed and a replica is taking over, in which case the replica would take over all the slots that were served by the primary). So for the most likely cases, the pipeline would require at most one redirection to be followed.
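To illustrate the pattern this proposal would enable, here is a sketch that pipelines one GET per key, one pipeline per slot grouping, using a plain (non-retry) cluster connection, which already allows Send/Flush/Receive; under this proposal the same pattern would also work through a RetryConn:

package example

import (
	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// pipelineGets sends one pipeline per slot grouping and collects the replies.
func pipelineGets(cluster *redisc.Cluster, keys ...string) ([]interface{}, error) {
	var results []interface{}
	for _, group := range redisc.SplitBySlot(keys...) {
		conn := cluster.Get()
		if err := redisc.BindConn(conn, group[0]); err != nil {
			conn.Close()
			return nil, err
		}
		for _, k := range group {
			if err := conn.Send("GET", k); err != nil {
				conn.Close()
				return nil, err
			}
		}
		if err := conn.Flush(); err != nil {
			conn.Close()
			return nil, err
		}
		for range group {
			v, err := conn.Receive()
			if err != nil {
				conn.Close()
				return nil, err
			}
			results = append(results, v)
		}
		conn.Close()
	}
	return results, nil
}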

Note that while a RetryConn can be used to run arbitrary commands via conn.Do with no concern for where the keys live (e.g. it may follow redirections for every single command called with conn.Do if someone is not careful in the key design), doing so is neither efficient nor good application design, and is not something that redisc wants to encourage or support explicitly (it does support it implicitly, as just mentioned, but that is just a side effect of the main feature of following redirections in case of a cluster failover). As such, for the same reasons, pipeline support in RetryConn would not attempt to optimize execution of the redirections it encounters (e.g. by running them all in parallel using goroutines or something like that). It assumes that the full pipeline is either guaranteed or very likely to be redirected to the same node if a redirection happens, and would retry the rest of the pipeline accordingly.

Implementation Concerns

There are a number of concerns to test and investigate before going forward with this proposal, some that are highly likely not an issue, and others that may be more tricky:

  • sending the pipeline GET a, INCR b, GET c should not repeat the already-executed commands if it fails after the 2nd command.
  • calling conn.Do("") should return the last received value (or the first error encountered).
  • conn.Receive should work properly, but that means maintaining the connections after following redirections (or, most likely, storing the results to be returned).
  • how would conn.Flush work? Likely it would flush to the initial conn. Can more Send calls be made after Flush? To test with redigo.
  • If used improperly (e.g. with no concern for key locations), it could very well be less efficient than not using a pipeline (e.g. send the full pipeline to node 1, first reply is a MOVED, send the full pipeline to node 2, second reply is a MOVED, etc.).
  • Does executing a pipeline stop at the first error, or does it keep running all subsequent commands? If it keeps running, it means the pipelined commands could execute in a different order (e.g. run cmd 1, cmd 2 returns a MOVED error, cmd 3 runs, then cmd 2 is retried on the different node). Could this break the expected results, e.g. if cmd 3 depends on cmd 2 (I'm not sure this can really happen given that they'd be distinct keys)?

(note that this proposal is not a promise of implementation, nor is it a promise of merging a PR that would implement this feature - it is first and foremost for me to give it some thought and as a reminder when I have some time to work on the package, and published for visibility)

concurrent map iteration and map write

The error is as follows:

fatal error: concurrent map iteration and map write

goroutine 244 [running]:
runtime.throw(0xd7b264, 0x26)
	C:/go/src/runtime/panic.go:617 +0x72 fp=0xc00024adb0 sp=0xc00024ad80 pc=0x42c262
runtime.mapiternext(0xc00024af58)
	C:/go/src/runtime/map.go:860 +0x597 fp=0xc00024ae38 sp=0xc00024adb0 pc=0x40ea67
runtime.mapiterinit(0xc3b460, 0xc000157ad0, 0xc00024af58)
	C:/go/src/runtime/map.go:850 +0x1c9 fp=0xc00024ae58 sp=0xc00024ae38 pc=0x40e3d9
github.com/mna/redisc.(*Cluster).refresh(0xc000414000, 0x1f01, 0xc000282300)
	E:/ivs_proj/go_default_path/pkg/mod/github.com/mna/[email protected]/cluster.go:81 +0x1e7 fp=0xc00024afc8 sp=0xc00024ae58 pc=0x8f5727
runtime.goexit()
	C:/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00024afd0 sp=0xc00024afc8 pc=0x458a21
created by github.com/mna/redisc.(*Cluster).needsRefresh
	E:/ivs_proj/go_default_path/pkg/mod/github.com/mna/[email protected]/cluster.go:154 +0xc9

redisc: too many attempts

Dear Sir,
My Redis is the Azure Redis service.
The Redis version is 3.2.7, clustered with 3 masters and 3 slaves.

I try to get a connection and run a command:
redisConnection.Do("GET", key)
Sometimes it works, sometimes it returns redisc: too many attempts.

Here is the related code I use to create the connection.
Where is my setup wrong? Thank you very much.

func redisClusterNewClient() redis.Conn {
	//return cluster.Get()
	retryConn, err := redisc.RetryConn(cluster.Get(), 3, 1*time.Millisecond)
	if err != nil {
		logger.New().Error(err.Error())
	}
	return retryConn
}

func initRedisCluter() {
	cluster = &redisc.Cluster{
		StartupNodes: []string{"my_azure_redis_service"},
		DialOptions:  initRedisDialOptions(),
		CreatePool:   createPool,
	}
	// initialize its mapping
	if err := cluster.Refresh(); err != nil {
		logger.New().Error("initial cluster refresh error")
	}
}


func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle:     16,
		MaxActive:  32,
		IdleTimeout: 100 * time.Second, 
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", configs.Get(confGroupName, confNodeAddresses).String(""), opts...)
			if err != nil {
				return nil, err
			}
			return c, nil
		},
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			_, err := c.Do("PING")
			return err
		},
	}, nil
}

Add support for Pipeline and multi operation commands

A Redis cluster driver is a fundamental module for projects that use a Redis cluster without a proxy. Redisc is, I think, the best cluster driver for Go, after evaluating all the available options. But our project uses pipelines and MGET/MSET, which are not supported in Redisc, so we added support for them at rmker/redisc@04ab71f. It is well tested and appears to work fine in my project.
I'm not sure if this matches the Redisc design philosophy. If you are interested, I could create a PR for it.
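For reference, multi-key reads can already be approximated on top of the current API by splitting the keys by slot and issuing one MGET per group. A sketch of that idea (this is not the linked fork's implementation):

package example

import (
	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// clusterMGet emulates MGET on a cluster by issuing one MGET per slot group;
// keys within a group hash to the same slot, so a multi-key command is valid.
func clusterMGet(cluster *redisc.Cluster, keys ...string) (map[string]string, error) {
	out := make(map[string]string, len(keys))
	for _, group := range redisc.SplitBySlot(keys...) {
		conn := cluster.Get()
		if err := redisc.BindConn(conn, group[0]); err != nil {
			conn.Close()
			return nil, err
		}
		args := make([]interface{}, len(group))
		for i, k := range group {
			args[i] = k
		}
		vals, err := redis.Values(conn.Do("MGET", args...))
		conn.Close()
		if err != nil {
			return nil, err
		}
		for i, v := range vals {
			if v != nil { // nil means the key does not exist
				s, _ := redis.String(v, nil)
				out[group[i]] = s
			}
		}
	}
	return out, nil
}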

Proposal: support Redis 7+ Cluster changes

Description

Starting with Redis 7, CLUSTER SLOTS is considered deprecated (but is still supported and works as before), and CLUSTER SHARDS is the new recommended way for cluster clients to discover the cluster topology.

This new command also supports a more flexible way for Redis admins to configure how clients are expected to connect to the cluster nodes, via the new cluster-preferred-endpoint-type configuration option. See the command's documentation for details.

For backwards compatibility (that is, redisc compatibility with older Redis versions), redisc should still support running CLUSTER SLOTS. The proposed approach is as follows:

  • Add Cluster.RedisMajor and Cluster.RedisMinor fields (ints) to explicitly state the version of Redis that will be used for the cluster. The minor version can be set to <0 if unknown/unspecified/irrelevant. If set to major version 7+, the cluster will use CLUSTER SHARDS, otherwise CLUSTER SLOTS. This should of course be set before using the cluster. Specifying the version allows similar upgrades/evolution in the future.
  • If the version fields are not set (both are <=0), if the WaitForCluster proposal is implemented, automatically read the version from Redis using INFO server. Otherwise, or if the Redis version has not been loaded yet, if Refresh gets called, do the same thing (read the version using INFO server).
  • Otherwise, if the version is not set explicitly and neither WaitForCluster nor Refresh gets called, keep using CLUSTER SLOTS (assume Redis < 7).

The rationale for the auto-detection is that it may not be convenient to explicitly set the Redis version in code when the application e.g. supports different Redis versions or uses different ones in different environments. Doing so during WaitForCluster makes sense because the application is explicitly saying that it is a good place to block for a bit (although it does consume part of the deadline allocated to wait for the cluster, but the implementation should always check for the cluster's stability at least once before returning).

The same can be said when Refresh is called, as it is highly recommended to call it before using the cluster. Finally, if none of those are called and the new fields are not set, then the fallback is to work as before (Redis < 7). The application should change its code to either call those explicitly, or set the new fields.
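As an illustration of the auto-detection step, a sketch using a plain redigo connection (not redisc internals) and deliberately simplified parsing; the cluster would then pick CLUSTER SHARDS when the major version is 7 or higher, and CLUSTER SLOTS otherwise:

package example

import (
	"errors"
	"strconv"
	"strings"

	"github.com/gomodule/redigo/redis"
)

// redisVersion reads the server version from INFO server, as the proposal
// suggests doing during WaitForCluster or Refresh.
func redisVersion(conn redis.Conn) (major, minor int, err error) {
	info, err := redis.String(conn.Do("INFO", "server"))
	if err != nil {
		return 0, 0, err
	}
	for _, line := range strings.Split(info, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "redis_version:") {
			continue
		}
		parts := strings.SplitN(strings.TrimPrefix(line, "redis_version:"), ".", 3)
		major, _ = strconv.Atoi(parts[0])
		if len(parts) > 1 {
			minor, _ = strconv.Atoi(parts[1])
		}
		return major, minor, nil
	}
	return 0, 0, errors.New("redis_version not found in INFO server reply")
}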

As this new API consists of new fields and new internal implementation details, it could be released as part of a minor version.

Use Case

This is required to support the new Redis 7+ feature of cluster-preferred-endpoint-type configuration option, and to prepare in case CLUSTER SLOTS support gets removed in a future Redis version.

Implementation Concerns

What if in auto-detection mode INFO server fails? I think that in this case any other command would likely fail too (e.g. network error), so return the error from WaitForCluster or Refresh, prompting the application to handle that failure. If it keeps running despite the error, then it will behave as for Redis < 7.

Is the minor version really relevant in the new API? I'd like to say no and we can do without it, but Redis has a history of releasing important changes in minor versions. Even recently, for example, ACL SETUSER has seen important additions in 6.2 (https://redis.io/commands/acl-setuser/).


(note that this proposal is not a promise of implementation, nor is it a promise of merging a PR that would implement this feature - it is first and foremost for me to give it some thought and as a reminder when I have some time to work on the package, and published for visibility)

SSL dial in cluster mode does not work

This is related to my closed issue: #13

Somehow, in redigo, for an SSL dial I just need to set redis.DialUseTLS(true) and everything works fine. But in redisc, we need to set both redis.DialUseTLS(true) and redis.DialTLSSkipVerify(true). I'm not sure why it cannot verify the server cert while redigo can.
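For reference, the workaround described above looks roughly like this (the address is hypothetical; skipping verification weakens security, so this is a diagnostic aid rather than a fix):

package example

import (
	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// newTLSCluster shows the reported workaround: TLS enabled, but certificate
// verification disabled.
func newTLSCluster() *redisc.Cluster {
	return &redisc.Cluster{
		StartupNodes: []string{"my-redis.example.com:6380"},
		DialOptions: []redis.DialOption{
			redis.DialUseTLS(true),
			// Reportedly required for redisc, while redigo alone works with
			// DialUseTLS only. Disabling verification is unsafe in production.
			redis.DialTLSSkipVerify(true),
		},
	}
}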

This seems to be much needed.

Thanks
Rui

Support address mapping for nodes

Description

Currently, when connecting to a cluster, the node addresses are given initially. But when a call to Refresh is made, the nodes may report a different IP than those initially used. This can happen, for example, when running the cluster inside Docker and trying to connect from outside of Docker. To account for this, a node address mapping could be given, so that redisc maps the addresses reported by the nodes to different addresses. This is already done in a Node.js Redis client: node-redis.
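A rough workaround sketch under the current API, applying the mapping inside the CreatePool callback (the mapping entries and addresses are hypothetical, and this only covers pooled connections; it is not the proposed built-in feature):

package example

import (
	"github.com/gomodule/redigo/redis"
)

// addrMap translates node-reported (e.g. Docker-internal) addresses to
// addresses reachable from the host.
var addrMap = map[string]string{
	"172.18.0.2:6379": "127.0.0.1:7000",
	"172.18.0.3:6379": "127.0.0.1:7001",
}

// createMappedPool can be set as Cluster.CreatePool so pooled connections
// dial the mapped address instead of the one reported by the node.
func createMappedPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	if mapped, ok := addrMap[addr]; ok {
		addr = mapped
	}
	return &redis.Pool{
		MaxIdle: 5,
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", addr, opts...)
		},
	}, nil
}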

Use case

When developing locally, it can be useful to have a cluster running using docker, but running tests from the host machine. This is currently an issue I am experiencing due to how our integration tests are run.

Proposal: improve handling of read-only connections

Description

As raised in #47 , the redisc.ReadOnlyConn function must be used to take advantage of replicas in a Redis cluster. However, this causes a READONLY redis command to be executed every time such a connection is obtained from the cluster (Cluster.Get), and a READWRITE redis command to be executed when that connection is closed so that it is returned to the pool in "vanilla" state (which is important in case e.g. the node targeted by that pool gets promoted to a primary at some point).

This proposal is to add a way to avoid those extra READONLY/READWRITE calls each time a connection is retrieved and returned to the pool, so that it can be emitted only once when the connection is created, and stay in read-only mode forever in that pool - or until the node gets promoted to a primary (though it should be tested what happens if a connection marked as "read-only" executes commands to a primary node, maybe it works fine given that the node is not a replica).

The proposed way to implement this is as an alternate way to set READONLY, by adding a Cluster.AssumeReadOnlySet boolean field. If not set (and by default), it uses the same approach as the current implementation. However, if set to true, the improved approach is enabled. What it does is that:

  • When getConn is called to actually get a connection from the pool, it calls redis.Pool.GetContext and stores whether a read-only connection is requested in the context (using a private, unexported key). It does this only if the address used is actually a replica (at least at the point in time when the connection is requested, as far as the client knows);
  • A new redisc.ReadOnlyRequested(ctx context.Context) bool function is added to be able to check if the context indicates a read-only connection request;
  • The user of redisc must set the redis.Pool.DialContext (https://pkg.go.dev/github.com/gomodule/redigo/redis#Pool) in the pools it creates via Cluster.CreatePool (alternately, if the pool returned by CreatePool does not have this field set nor Pool.Dial, a default implementation could be used automatically);
  • In that custom DialContext function, it can check for redisc.ReadOnlyRequested(ctx), and if true, emit the READONLY command after dialing a new connection (see the sketch after this list);
  • The cluster will not emit another READONLY command, it will assume it was done by the caller; similarly, it will not emit a READWRITE on connection close, so it stays in read-only mode in the pool and on subsequent use.
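To make the flow above concrete, a sketch of what a user-provided pool could look like under this proposal; note that redisc.ReadOnlyRequested and the Cluster.AssumeReadOnlySet field do not exist yet, they are the names proposed here:

package example

import (
	"context"

	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// createReadOnlyAwarePool sketches the proposed flow: the cluster flags
// read-only requests in the context, and the user's DialContext sends
// READONLY once, when the connection is created.
func createReadOnlyAwarePool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle: 10,
		DialContext: func(ctx context.Context) (redis.Conn, error) {
			c, err := redis.Dial("tcp", addr, opts...)
			if err != nil {
				return nil, err
			}
			if redisc.ReadOnlyRequested(ctx) { // proposed API, does not exist yet
				if _, err := c.Do("READONLY"); err != nil {
					c.Close()
					return nil, err
				}
			}
			return c, nil
		},
	}, nil
}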

Use Case

The use case is pretty well explained in the description, it is an optimization to avoid unnecessary network round-trips when taking advantage of reads on replicas via the ReadOnlyConn feature.

Note that the proposal is only concerned with Cluster configured with a CreatePool function and when using Cluster.Get. It is explicitly not concerned with the use of non-pooled connections (i.e. Cluster.Dial or Cluster.Get when Cluster.CreatePool is nil). When using a non-pooled connection, there is almost no optimization possible (the READONLY command must be executed once, which the current implementation would do - the only small optimization possible is the unnecessary READWRITE call, but that call can be avoided in the more common case by adding a check if the connection was obtained via Cluster.Dial).

Implementation Concerns

  • Check what happens if a replica gets promoted to a primary and the pool contains read-only connections. Do they work properly despite the "read-only" flag set?
  • What if a replica gets promoted to a primary and later goes back to a replica, does the presence of read-only connections cause any issues? I don't think it does but think this through properly.
  • In getRandomConn, it should set readOnly to false if getNodeAddrs does not return the replicas. getConnForSlot does it correctly.
  • Any potential issue with a retry conn?

(note that this proposal is not a promise of implementation, nor is it a promise of merging a PR that would implement this feature - it is first and foremost for me to give it some thought and as a reminder when I have some time to work on the package, and published for visibility)

Is the HMSET operation supported when using Redis with cluster mode enabled?

Currently I am using Redis with cluster mode enabled, with the following config:
cluster := redisc.Cluster{
	StartupNodes: []string{"******.clustercfg.use1.cache.amazonaws.com:6379"},
	DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
	CreatePool:   createPool,
}

and doing the following operation:
hmsetargs := make([]interface{}, len(tags)+1)
hmsetargs[0] = constant.ROOT_KEY
for k, v := range tags {
	hmsetargs = append(hmsetargs, k, v)
}
tempcon := pool.Get()
// make it handle redirections automatically
conntoset, err := redisc.RetryConn(tempcon, 3, 100*time.Millisecond)
if err != nil {
	log.Error().Err(err).Msg("RetryConn failed: ")
}

_, err = conntoset.Do("hmset", hmsetargs...)
tempcon.Close()

and I am seeing this error:
"error":"ERR wrong number of arguments for HMSET"

High Connection Churn

Hi,

We are seeing high connection churn when using this library with an AWS ElastiCache cluster (2 masters with 2 slave nodes each).

Below is the code snippet for creating the pooled connection:

redis/pool.go

package redis

// other imports excluded
import (
    "bitbucket.org/my_projects/go_sample/common"
    "bitbucket.org/my_projects/go_sample/config"
    "github.com/gomodule/redigo/redis"
    "github.com/mna/redisc"
)

var (
    // RedisPoolLocationBroker is used to provide access to and manage connection pool
    RedisPoolLocationBroker *redisc.Cluster
)

func init() {
    common.ServerLogger.Printf("Redis Location Broker Connection Pool: Initializing with config - \n%+v\n", config.AppConfig)
    RedisPoolLocationBroker = newLocationBrokerPool(config.AppConfig)
    // initialize its mapping
    if err := RedisPoolLocationBroker.Refresh(); err != nil {
        common.ServerLogger.Fatalf("Refresh failed: %v", err)
    }
    common.ServerLogger.Printf("Redis Location Broker Connection Pool Stats: %v\n", RedisPoolLocationBroker.Stats())

}


func newLocationBrokerPool(config config.AppConfiguration) *redisc.Cluster {
    timeout := 50 * time.Millisecond
    options := []redis.DialOption{
        redis.DialConnectTimeout(timeout),
        redis.DialReadTimeout(timeout),
        redis.DialWriteTimeout(timeout),
    }

    return &redisc.Cluster{
        StartupNodes: []string{config.RedisLocationBroker},
        DialOptions:  options,
        CreatePool:   createLocationBrokerPool, //(config.RedisLocationBroker, options...),
    }
}

func createLocationBrokerPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
    return &redis.Pool{

        MaxIdle:     config.AppConfig.PoolSize,
        IdleTimeout: time.Duration(config.AppConfig.IdleTimeout) * time.Second,
        MaxActive:   config.AppConfig.MaxActiveConn,

        Dial: func() (redis.Conn, error) {
            conn, err := redis.Dial(config.AppConfig.NetworkProtocol, addr)
            if err != nil {
                common.ServerLogger.Printf("ERROR: fail initializing the redis location broker pool: %s", err.Error())
                return nil, err
            }
            return conn, err
        },

        TestOnBorrow: func(c redis.Conn, t time.Time) error {
            _, err := c.Do("Conn Failure")
            return err
        },
    }, nil
}

Sample usage

package test

// other imports excluded
import (
	"bitbucket.org/my_projects/go_sample/common"
	sfxRedis "bitbucket.org/my_projects/go_sample/redis"
	redRedis "github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

func UpdateDeviceLocation(deviceId, latitude, longitude, accuracy, updateTimestamp string) string {
	var message string

	redisConnectionLocationBroker := sfxRedis.RedisPoolLocationBroker.Get()
	defer redisConnectionLocationBroker.Close()
	common.ServerLogger.Printf("Redis Connection Pool Location Broker Stats: %v\n", sfxRedis.RedisPoolLocationBroker.Stats())

	// make it handle redirections automatically
	redisConnectionLocationBrokerRetry, redisConnectionLocationBrokerRetryErr :=
		redisc.RetryConn(redisConnectionLocationBroker, 3, 100*time.Millisecond)
	if redisConnectionLocationBrokerRetryErr != nil {
		common.ServerLogger.Fatalf("RetryConn failed: %v", redisConnectionLocationBrokerRetryErr)
	}

	deviceKey := "device:" + deviceId
	deviceLocationKey := deviceKey + ":location"

	timestampLayout := "2006-01-02 15:04:05"
	updateTime, _ := time.Parse(timestampLayout, updateTimestamp)

	minAllowedTime := time.Now().UTC().Add(-5 * time.Minute)

	globalKey := "global"
	currentTime := time.Now().UTC()

	// updating the current lat longs if they are accurate
	if minAllowedTime.Before(updateTime) {
		if currentTime.Before(updateTime) {
			updateTime = currentTime
		}
		_, geoset_err := redisConnectionLocationBrokerRetry.Do("GEOADD", globalKey, longitude, latitude, deviceKey)
		_, loc_time_err := redisConnectionLocationBrokerRetry.Do("HSET", deviceLocationKey, "last_seen", updateTime.Format(timestampLayout))
		_, loc_lon_err := redisConnectionLocationBrokerRetry.Do("HSET", deviceLocationKey, "longitude", longitude)
		_, loc_lat_err := redisConnectionLocationBrokerRetry.Do("HSET", deviceLocationKey, "latitude", latitude)
		if accuracy != "" {
			_, loc_acc_err := redisConnectionLocationBrokerRetry.Do("HSET", deviceLocationKey, "accuracy", accuracy)
			if loc_acc_err != nil{
				common.ServerLogger.Printf("Errors ", geoset_err, loc_time_err,  loc_lat_err, loc_lon_err, loc_acc_err)
				message = "Location update failed"
			}
		}
		if geoset_err != nil || loc_time_err != nil ||  loc_lat_err != nil || loc_lon_err != nil{
			common.ServerLogger.Printf("Errors ", geoset_err, loc_time_err,  loc_lat_err, loc_lon_err)
			message = "Location update failed"
		}
		message = "Location Updated Successfully"
	} else {
		message = "Location is too old"
	}
	return message
}

Config
{
    "RedisLocationBroker": "<elasticache_ip>:6379",
    "PoolSize": 100,
    "MaxActiveConn": 100,
    "IdleTimeout": 60,
    "NetworkProtocol": "tcp"
}

We are seeing around 10-12k active connections in ElastiCache. We are migrating from the fzzy/radix library; the same code and ElastiCache endpoint previously had barely 200-300 connections.

Please advise on what could be wrong.

READONLY command on every request

Hello,
I'm currently starting to implement this library in our system, using the default cluster implementation with 3 masters and 3 replicas. One of the main things I want to implement is using only replica servers for all read-only commands. Based on the implementation of redisc, this can be done by calling redisc.ReadOnlyConn(conn) before issuing each read-only command.
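For context, the per-request pattern in question looks roughly like this (a sketch):

package example

import (
	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// readValue marks a freshly obtained connection as read-only on every call,
// which is what triggers the extra READONLY (and the READWRITE when the
// connection goes back to the pool).
func readValue(cluster *redisc.Cluster, key string) (string, error) {
	conn := cluster.Get()
	defer conn.Close()
	if err := redisc.ReadOnlyConn(conn); err != nil {
		return "", err
	}
	return redis.String(conn.Do("GET", key))
}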

This approach works, but in redis monitor it seems the READONLY command is sent on every request. This doesn't seem efficient (at high RPS this adds non-negligible latency), so is it possible to change the behavior to only call READONLY when the connection is created?

I've tried creating a separate pool for the read-only path and taking connections from it, but so far it doesn't work as I expected. Is there anything else I can try?

Connection fails if the cluster consists of only one master

If the Redis cluster consists of only one master node, this lib doesn't work. If there is more than one master node, it works fine.

{"level":"error","ts":"2020-04-24T04:43:44.210Z","msg":"couldn't get cache from redis for account 1, reason: redisc: failed to get a connection"}

The main problem is in dev environments.

Add support for redigo's Pool.GetContext (to support waiting with timeout when pool is full)

The default behaviour of the redigo Pool is to simply return an invalid connection
when the pool is full/exhausted (no available connections).

The problem is described in more detail at gomodule/redigo#56

We could add a kind of blocking with a timeout when getting a connection from the redigo pool.
So, instead of simply returning an invalid connection, we could first wait for a configurable amount of time.

It could be implemented using a semaphore (from https://godoc.org/golang.org/x/sync/semaphore).
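A rough sketch of the idea, with hypothetical names (not necessarily how the implementation linked below works): wrap the pool with a semaphore sized to MaxActive, so callers wait up to a timeout instead of getting an errored connection.

package example

import (
	"context"
	"time"

	"github.com/gomodule/redigo/redis"
	"golang.org/x/sync/semaphore"
)

// blockingPool wraps a redigo pool with a weighted semaphore.
type blockingPool struct {
	pool *redis.Pool
	sem  *semaphore.Weighted // sized to the pool's MaxActive
}

func (p *blockingPool) get(timeout time.Duration) (redis.Conn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	if err := p.sem.Acquire(ctx, 1); err != nil {
		return nil, err // timed out waiting for a free slot
	}
	c := p.pool.Get()
	if err := c.Err(); err != nil {
		c.Close()
		p.sem.Release(1)
		return nil, err
	}
	// Callers must release the semaphore slot after closing the connection.
	return c, nil
}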

I have some implementation at tokopedia@2fa113e.

If you are interested, I could create a proper PR with proper tests.

redisc.closed error

Hi,
When I run the following code I get a redisc connection closed error. I am working on an Amazon cluster with 9 nodes (3 masters, each with two slaves). The error I get is: SET failed: redisc: closed.
It works when I comment out the defer cluster.Close() and defer conn.Close() calls.
package main

import (
	"log"
	"time"

	"github.com/davecgh/go-spew/spew"
	"github.com/garyburd/redigo/redis"
	"github.com/mna/redisc"
)

var (
	pool *redis.Pool
)

func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle:     5,
		MaxActive:   10,
		IdleTimeout: time.Minute,
		Dial: func() (redis.Conn, error) {
			log.Println(addr)
			c, err := redis.Dial("tcp", addr, opts...)
			if err != nil {
				log.Panic(err)
			}
			return c, err
		},
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			_, err := c.Do("PING")
			return err
		},
	}, nil
}

func redisConnection() redis.Conn {
	cluster := redisc.Cluster{
		StartupNodes: []string{"*****.clustercfg.usw2.cache.amazonaws.com:6379"},
		DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Minute)},
		CreatePool:   createPool,
	}
	defer cluster.Close()

	if err := cluster.Refresh(); err != nil {
		log.Fatalf("Refresh failed: %v", err)
	} else {
		log.Println("Refresh worked")
	}

	conn := cluster.Get()
	defer conn.Close()

	if conn != nil {
		rc, err := redisc.RetryConn(conn, 9, 100*time.Millisecond)
		if err == nil {
			return rc
		} else {
			spew.Dump(err)
			return nil
		}
	} else {
		log.Println("Conn is nil")
		return nil
	}
	return nil
}

func main() {
	rcon := redisConnection()
	defer rcon.Close()

	_, err1 := rcon.Do("SET", "mykey", "23")
	if err1 != nil {
		log.Fatalf("SET failed: %v", err1)
	}
	v, err := rcon.Do("GET", "mykey")
	if err != nil {
		log.Fatalf("GET failed: %v", err)
	} else {
		log.Println("My val", v)
	}
}

Does it support redis module command?

I want to execute Redis module commands on a Redis cluster. Does this cluster client support module commands, for example RediSearch?
Thank you very much.

Redisc Error - MOVED 15148 127.0.0.1:30003

I am trying to use redisc. I have followed the example given in example_test.go. My code looks like the following:

package main

import (
	"github.com/PuerkitoBio/redisc"
	"github.com/garyburd/redigo/redis"
	"log"
	"time"
)

const script = `if redis.call("EXISTS", KEYS[1]) == 1 then
    local keyvalues = redis.call("HGETALL", KEYS[1])
    local a = {}
    for i=2, table.getn(ARGV) do
      a[i-1] = ARGV[i]
    end
    local res = redis.call("HMSET", KEYS[1], unpack(a))
    redis.call("EXPIRE", KEYS[1], ARGV[1])  
    return keyvalues
else
    return 2 -- "Key doesn't exists"
end`

func main() {
	cluster := redisc.Cluster{
		StartupNodes: []string{":30001", ":30002", ":30003", ":30004", ":30005", ":30006"},
		DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
		CreatePool:   createPool,
	}
	defer cluster.Close()

	// initialize its mapping
	if err := cluster.Refresh(); err != nil {
		log.Fatalf("Refresh failed: %v", err)
	}

	// grab a connection from the pool
	conn := cluster.Get()
	defer cluster.Close()
	rScript := redis.NewScript(1, script)
	argv := make([]string, 5)
	argv[0] = "30000"
	argv[1] = "SSF_lastAccessedDate"
	argv[2] = "1481627386"
	argv[3] = "SSF_expiryDate"
	argv[4] = "2481657386"
	reply, errS := rScript.Do(conn, "JJNb324a680c35d11e6a1123c15c2d271f21481871788G", argv)
	if errS != nil {
		log.Println("Error in executing script " + errS.Error())
	} else {
		log.Printf("Result %+v", reply)
	}
}

func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle:     100,
		MaxActive:   4000,
		IdleTimeout: time.Minute,
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", addr, opts...)
		},
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			if time.Since(t) < time.Minute {
				return nil
			}
			_, err := c.Do("PING")
			return err
		},
	}, nil
}

But on running the code, it throws the following error:

2016/12/16 12:39:37 Error in executing script MOVED 15148 127.0.0.1:30003

Can someone let me know what I am doing wrong?

Send PSUBSCRIBE failed

conn := getConn()
defer conn.Close()
psc := redis.PubSubConn{Conn: conn}
if err := psc.Subscribe("__keyevent@0__:expired"); err != nil {
	logger.Error(err.Error())
}

This error is shown:
redisc: unsupported call to Flush

I tried a single-node Redis with a redigo connection, and it worked.
Is there anything I missed when creating the pub/sub connection?
Thanks

Proposal: add `Cluster` method to wait for stable Redis cluster

Description

Add a Cluster.WaitForCluster(ctx context.Context) error method that calls the CLUSTER INFO Redis command at intervals until it returns cluster_state:ok or the context expires. The call blocks until the cluster is stable or the context expires, and on expiration it returns the context's error (ctx.Err()). On success (once a call to CLUSTER INFO returns cluster_state:ok), it returns nil.

It is a new API and as such could be released as part of a minor version.

Use Case

It is recommended to call Cluster.Refresh() at the start of an application, so that the first Redis connections already benefit from smart routing. However, if the Redis cluster is still being set up and created at the same time the application starts, it is possible that the CLUSTER SLOTS Redis command (called by Cluster.Refresh()) only returns partial information if the cluster is not yet created or stable, resulting in potentially slower calls for the first few connections, or even failures if by the time the first connections are made the cluster is still not stable.

A typical use for this new API would be to call Cluster.WaitForCluster(ctx) at the start of the application, before calling Cluster.Refresh(), so that the full slots-to-node mapping is known before the first use. The timeout is defined by the caller (typically via context.WithDeadline() or context.WithTimeout()) as there's no single good value that the package could use for this.

Of course the method can also be called in other contexts, but the benefit is less obvious. However, it would still work as expected in the sense that it would always block until the call expires or Redis replies with cluster_state:ok.

Implementation Concerns

How should the polling interval work? I don't want to complicate the API and make that configurable, as in the main use-case (at the start of the application), the interval should not be a critical thing (there shouldn't be much load on the redis servers, there's no "thundering herd" concern, etc). My guess would be to use something similar to Go in net/http/server.go (Server.Serve method), where it retries the calls to Accept by starting small (5ms, though in the case of redisc, something like 100ms might be a better initial retry interval) and doubling until it reaches 1s.
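A sketch of what that polling loop might look like (WaitForCluster does not exist yet; the loop checks at least once before honouring expiration, and doubles the retry interval up to one second):

package example

import (
	"context"
	"strings"
	"time"

	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

// waitForCluster polls CLUSTER INFO until cluster_state:ok or ctx expires.
func waitForCluster(ctx context.Context, c *redisc.Cluster) error {
	delay := 100 * time.Millisecond
	for {
		conn := c.Get()
		info, err := redis.String(conn.Do("CLUSTER", "INFO"))
		conn.Close()
		if err == nil && strings.Contains(info, "cluster_state:ok") {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		delay *= 2
		if delay > time.Second {
			delay = time.Second
		}
	}
}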


(note that this proposal is not a promise of implementation, nor is it a promise of merging a PR that would implement this feature - it is first and foremost for me to give it some thought and as a reminder when I have some time to work on the package, and published for visibility)

Unstable redistest.StartCluster due to redis "address already in use"

We use the redistest cluster implementation in our tests, and we often see unstable starts where the redis-server exits prematurely due to "address already in use" after the free port is "acquired" by the getFreePort function. Probably it closes the underlying OS connection file descriptor at a later time (see the go1.10beta1 blog.gopheracademy.com blog post, search for Close) and the golang/go issue: golang/go#21856.

We currently see some more stable starts (retry on failure) with this change: dualinventive@ca95cd2

Have you also seen this behaviour?
We are running under Debian 8 AMD64 (kernel 3.16.0-4-amd64), if that helps.

Kind regards,
Jerry

Master branch broken for vendorizing

I tried to vendorize with golang/dep but the master branch doesn't build.

jjacobs@dev04:~/go/src/github.com/mna/redisc$ go test .
# github.com/mna/redisc
./cluster.go:396:38: undefined: redis.PoolStats
./cluster.go:400:27: undefined: redis.PoolStats
./cluster.go:403:24: pool.Stats undefined (type *redis.Pool has no field or method Stats)
FAIL    github.com/mna/redisc [build failed]
jjacobs@dev04:~/go/src/github.com/mna/redisc$ git l | head -1
* 2017-10-08 a2dd259 (HEAD, origin/master, origin/HEAD, master) fix test to support new output format of CLUSTER NODES  [Martin Angers]

Clarification regarding startup nodes

We're using elasticache with redis cluster mode enabled and I'm curious which nodes I should be passing in for the StartupNodes.

Here's the situation: AWS assigns one record which contains all the nodes in the cluster. Leaders and replicas alike.

12:46:23 $ dig +short <super-secret>.usw2.cache.amazonaws.com
10.57.88.13
10.57.148.73
10.57.3.211
10.57.74.34
10.57.2.144
10.57.159.224
10.57.28.193
10.57.151.204
10.57.157.217
10.57.75.158
10.57.1.36
10.57.95.50

Each node of course has its own address, but the master nodes can vary over time as there are outages and failures. However, there are no records representing the current set of master nodes.

Reading the code, I'm particularly confused about these few lines:

https://github.com/mna/redisc/blob/master/cluster.go#L67-L69

It appears to me that we get the cluster mappings as many times as there are masters, proceeding forward only if there's no error when connecting to said master. Would this not mean that we can pass a list of all nodes into StartupNodes relying on the failure mechanic to ensure that the correct mapping eventually gets populated?

Related to the above, would it make sense to resolve the list of A records for a particular DNS address and set the startup nodes to that list?
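For illustration, resolving the configuration endpoint into individual addresses yourself would look something like this (a sketch; whether redisc needs all nodes or just one reachable node is exactly the question above):

package example

import "net"

// startupNodesFromDNS expands a configuration endpoint into one "ip:port"
// entry per resolved A record, suitable for Cluster.StartupNodes.
func startupNodesFromDNS(host, port string) ([]string, error) {
	ips, err := net.LookupHost(host)
	if err != nil {
		return nil, err
	}
	nodes := make([]string, 0, len(ips))
	for _, ip := range ips {
		nodes = append(nodes, net.JoinHostPort(ip, port))
	}
	return nodes, nil
}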

Thanks and great work on the package!! 💯🎉

Handle redis cluster via redis load balancer URL

Hi

I am using Microsoft Azure Redis and it does not provide an individual IP address for each cluster node; rather, it only provides the Redis cluster URL (which is actually a load balancer, not the real Redis nodes). Can this lib handle this? If so, should I put this Redis URL in StartupNodes or in the dial options' address?

Thanks
Rui

Separate pool for masters and slaves?

Helm’s redis and redis-cluster charts seem to create one Kubernetes service for hitting the master(s) and one Kubernetes service for hitting the slave(s). I would like to direct my readonly queries to the slaves and my write queries to the masters.

I’d like to use a redisc Cluster with an underlying Pool. Since I have two different endpoints - one for masters and one for slaves - it seems I need to create two different Clusters, each with their own Pool. Is this correct?

Can redisc recover from master-slave node failover?

Hello, Martin
I am a little confused about failover between the primary and backup nodes. Redisc relies on MOVED redirections to refresh, but when the primary node crashes, the backup node will eventually be promoted to primary, and redisc still only communicates with the crashed primary node, which means it will never get a MOVED response and cannot refresh. The system will not be able to heal itself. Did I misunderstand something? Or is there a way to handle such situations and automatically perform failover?
