
redis-in-action's Introduction

Let's make this absolutely clear. Russia invaded Crimea, and the world did nothing; then Russia invaded Ukraine again.

If you support Russia's efforts in Ukraine, you can fuck off. I don't care where you come from, what your nationality, religion, etc. is, you can fuck off. No, for real. YOU DO NOT GET A LICENSE IF YOU AGREE WITH THE RUSSIAN GOVERNMENT.

Let's make this absolutely clear. Israel is in the process of engaging in the genocide of the Palestinian people, and has been since before I was born. If you support the continued actions of Israel in Gaza, you can fuck off. I don't care where you come from, what your nationality, religion, etc. is, you can fuck off. No, for real. YOU DO NOT GET A LICENSE IF YOU AGREE WITH THE ISRAELI GOVERNMENT. Hamas doesn't get a pass for their actions, but they aren't a government elected by the people.

redis-in-action
===============

This project intends to hold the various implementations of code from the book Redis in Action, written by Josiah Carlson, published by Manning Publications, which is available for purchase: http://manning.com/carlson/

If you would like to read the errata, it is available as a PDF at the above URL. If you would prefer HTML, the most recent version in this repository is (hopefully always) available here: https://htmlpreview.github.io/?https://github.com/josiahcarlson/redis-in-action/blob/master/excerpt_errata.html

redis-in-action's People

Contributors

1nfrastr, agalloch, allianzcortex, danielsundman, dependabot[bot], ervandew, ftwbzhao, geoand, hanmd82, harudark, huangzworks, inouetakuya, jianmingxia, josiahcarlson, kimi0230, letcafe, lpan, pardeep-singh, paulinohuerta, rilma, safarishi, samluthebrave, senjin-hajrulahovic, slackpad, xvanturing, yangkian, yuqisun, zaq1tomo


redis-in-action's Issues

Go HMSet method fail

redis_version: 3.0.504
When I ran Chapter01's redisConn_test.go from GoLand, I checked Redis and found that the results were not stored.
After changing the code as follows, I got the error shown below:

func (r *ArticleRepo) PostArticle(user, title, link string) string {
	articleId := strconv.Itoa(int(r.Conn.Incr("article:").Val()))

	voted := "voted:" + articleId
	r.Conn.SAdd(voted, user)
	r.Conn.Expire(voted, common.OneWeekInSeconds*time.Second)

	now := time.Now().Unix()
	article := "article:" + articleId
	_, err := r.Conn.HMSet(article, map[string]interface{}{
		"title":  title,
		"link":   link,
		"poster": user,
		"time":   now,
		"votes":  1,
	}).Result()
	if err != nil {
		fmt.Println(err)
	}

	r.Conn.ZAdd("score:", &redis.Z{Score: float64(now + common.VoteScore), Member: article})
	r.Conn.ZAdd("time:", &redis.Z{Score: float64(now), Member: article})
	return articleId
}

ERR wrong number of arguments for 'hset' command

I made the following changes to get the results stored in Redis:

func (r *ArticleRepo) PostArticle(user, title, link string) string {
	articleId := strconv.Itoa(int(r.Conn.Incr("article:").Val()))

	voted := "voted:" + articleId
	r.Conn.SAdd(voted, user)
	r.Conn.Expire(voted, common.OneWeekInSeconds*time.Second)

	now := time.Now().Unix()
	article := "article:" + articleId
	ret := map[string]interface{}{
		"title":  title,
		"link":   link,
		"poster": user,
		"time":   now,
		"votes":  1,
	}
	for key, value := range ret {
		r.Conn.HSet(article, key, value)
	}

	r.Conn.ZAdd("score:", &redis.Z{Score: float64(now + common.VoteScore), Member: article})
	r.Conn.ZAdd("time:", &redis.Z{Score: float64(now), Member: article})
	return articleId
}

[2.4. Database row caching](https://livebook.manning.com/book/redis-in-action/chapter-2/59): the 'delay:' sorted set should remove the row_id value

def schedule_row_cache(conn, row_id, delay):
    conn.zadd('delay:', row_id, delay)           #A
    conn.zadd('schedule:', row_id, time.time())  #B

def cache_rows(conn):
    while not QUIT:
        next = conn.zrange('schedule:', 0, 0, withscores=True)  #A
        now = time.time()
        if not next or next[0][1] > now:
            time.sleep(.05)                                     #B
            continue

        row_id = next[0][0]
        delay = conn.zscore('delay:', row_id)                   #C
        if delay <= 0:
            conn.zrem('delay:', row_id)                         #D
            conn.zrem('schedule:', row_id)                      #D
            conn.delete('inv:' + row_id)                        #D
            continue

        row = Inventory.get(row_id)                             #E
        conn.zadd('schedule:', row_id, now + delay)             #F
        conn.set('inv:' + row_id, json.dumps(row.to_dict()))    #F

If I just want to cache one row_id's data once (within 5 seconds) instead of re-caching it forever,
do I need to add a line like this:

def cache_rows(conn):
    while not QUIT:
        next = conn.zrange('schedule:', 0, 0, withscores=True)  #A
        now = time.time()
        if not next or next[0][1] > now:
            time.sleep(.05)                                     #B
            continue

        row_id = next[0][0]
        delay = conn.zscore('delay:', row_id)                   #C
        if delay <= 0:
            conn.zrem('delay:', row_id)                         #D
            conn.zrem('schedule:', row_id)                      #D
            conn.delete('inv:' + row_id)                        #D
            continue

        row = Inventory.get(row_id)                             #E
        conn.zadd('schedule:', row_id, now + delay)             #F
        conn.zadd('delay:', row_id, 0)           #  **** the line I added ****
        conn.set('inv:' + row_id, json.dumps(row.to_dict()))    #F

If not, the original code will loop forever, re-caching the data for this row_id.
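
If the goal is really to cache the row once and then leave the cached value alone, another option (a sketch, not from the book) is to write the cache entry directly and keep the row out of both scheduling sets, so cache_rows() never refreshes or deletes it:

def cache_row_once(conn, row_id):
    # write the cached copy immediately, then make sure cache_rows()
    # will neither refresh nor delete it later
    row = Inventory.get(row_id)
    conn.set('inv:' + row_id, json.dumps(row.to_dict()))
    conn.zrem('delay:', row_id)
    conn.zrem('schedule:', row_id)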

Is it a better idea to replace the 'SETNX' command with 'SET'?

if conn.setnx(lockname, identifier): #B

As the snippet shows, adding an expiration to the key is a separate operation from SETNX, so there is a chance the expiration never gets applied (for example, if the application crashes just after SETNX but before EXPIRE).

Since the 'SET' command supports the ex and px parameters, is it a better idea to use SET in place of SETNX plus EXPIRE?

def acquire_lock_with_timeout(
        conn, lockname, acquire_timeout=10, lock_timeout=10):
    identifier = str(uuid.uuid4())  # A
    lockname = 'lock:' + lockname
    lock_timeout = int(math.ceil(lock_timeout))  # D

    end = time.time() + acquire_timeout
    while time.time() < end:
        if conn.set(lockname, identifier, ex=lock_timeout, nx=True):
            return identifier

        # the commented-out code below is equivalent to the call:
        #
        # set(key, value, ex=xx, nx=True)

        # if conn.setnx(lockname, identifier):  # B
        #     conn.expire(lockname, lock_timeout)  # B
        #     return identifier
        elif conn.ttl(lockname) < 0:  # C
            conn.expire(lockname, lock_timeout)  # C

        time.sleep(.001)

    return False
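
For completeness, a minimal usage sketch of the SET-based variant, assuming the book's release_lock() from chapter 6 ('market' here is just an example lock name):

identifier = acquire_lock_with_timeout(conn, 'market', acquire_timeout=10, lock_timeout=10)
if identifier:
    try:
        pass    # critical section: work with the locked resource here
    finally:
        release_lock(conn, 'market', identifier)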

java gradlew.bat -Pchapter=1 run fails to build

Error message

FAILURE: Build failed with an exception.

* What went wrong:
Could not resolve all dependencies for configuration ':compile'.
> Could not resolve redis.clients:jedis:2.1.0.
  Required by:
      :redis-in-action:1.1
   > Could not GET 'http://repo1.maven.org/maven2/redis/clients/jedis/2.1.0/jedis-2.1.0.pom'. Received status code 501 from server: HTTPS Required
> Could not resolve org.javatuples:javatuples:1.2.
  Required by:
      :redis-in-action:1.1
   > Could not GET 'http://repo1.maven.org/maven2/org/javatuples/javatuples/1.2/javatuples-1.2.pom'. Received status code 501 from server: HTTPS Required
> Could not resolve com.google.code.gson:gson:2.2.2.
  Required by:
      :redis-in-action:1.1
   > Could not GET 'http://repo1.maven.org/maven2/com/google/code/gson/gson/2.2.2/gson-2.2.2.pom'. Received status code 501 from server: HTTPS Required

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

Cause

Since January 15, 2020, the Maven Central repository no longer supports insecure communication over plain HTTP and requires all requests to the repository to be encrypted with HTTPS.

Workaround

Change the mavenCentral() repository configuration in redis-in-action/java/build.gradle to maven { url "https://repo.maven.apache.org/maven2" } and the build succeeds.

repositories {
    //mavenCentral()
    maven { url"https://repo.maven.apache.org/maven2" }

    flatDir {
        dirs 'libs'
    }
}

potential race conditions in Chapter 8 example

First of all, thank you very much for writing such an excellent book. I really enjoy it. 😄

I'm wondering if there are some race conditions in the Chapter 8 example.

  1. If userA quickly follows two users, userB and userC:
    follow_user(conn, userA, userB) runs first, up to following, followers, status_and_score = pipeline.execute()[-3:]. Suppose the value of the variable following is 1 at that point.
    Then execution switches to follow_user(conn, userA, userC), which runs to the end and sets following in user:userA to 2.
    Then it switches back to finish follow_user(conn, userA, userB), which sets following in user:userA back to 1.
    I think either WATCH or a lock is required here, since following and some other values are retrieved in one execute() call and used in another (a possible WATCH-based rework is sketched at the end of this issue).
def follow_user(conn, uid, other_uid):
    fkey1 = 'following:%s'%uid          #A
    fkey2 = 'followers:%s'%other_uid    #A

    if conn.zscore(fkey1, other_uid):   #B
        return None                     #B

    now = time.time()

    pipeline = conn.pipeline(True)
    pipeline.zadd(fkey1, other_uid, now)    #C
    pipeline.zadd(fkey2, uid, now)          #C
    pipeline.zcard(fkey1)                           #D
    pipeline.zcard(fkey2)                           #D
    pipeline.zrevrange('profile:%s'%other_uid,      #E
        0, HOME_TIMELINE_SIZE-1, withscores=True)   #E
    following, followers, status_and_score = pipeline.execute()[-3:]

    pipeline.hset('user:%s'%uid, 'following', following)        #F
    pipeline.hset('user:%s'%other_uid, 'followers', followers)  #F
    if status_and_score:
        pipeline.zadd('home:%s'%uid, **dict(status_and_score))  #G
    pipeline.zremrangebyrank('home:%s'%uid, 0, -HOME_TIMELINE_SIZE-1)#G

    pipeline.execute()
    return True
  2. Similar situations may happen in the unfollow_user function, too.
  3. It's worse if follow_user and unfollow_user overlap.

Please help correct me if I am wrong. Thank you very much.
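
For discussion, here is a hedged sketch of the WATCH idea from point 1. It only handles the follower/following counters (the home-timeline work at #E/#G is omitted), keeps the listing's old redis-py zadd argument order, and assumes the same imports (time, redis) as the chapter 8 code:

def follow_user_watched(conn, uid, other_uid):
    fkey1 = 'following:%s' % uid
    fkey2 = 'followers:%s' % other_uid
    pipeline = conn.pipeline(True)
    while True:
        try:
            pipeline.watch(fkey1, fkey2)           # retry if either set changes under us
            if pipeline.zscore(fkey1, other_uid):
                pipeline.unwatch()
                return None

            now = time.time()
            following = pipeline.zcard(fkey1) + 1  # counts as they will be after this follow
            followers = pipeline.zcard(fkey2) + 1

            pipeline.multi()
            pipeline.zadd(fkey1, other_uid, now)
            pipeline.zadd(fkey2, uid, now)
            pipeline.hset('user:%s' % uid, 'following', following)
            pipeline.hset('user:%s' % other_uid, 'followers', followers)
            pipeline.execute()                     # raises WatchError if fkey1/fkey2 changed
            return True
        except redis.exceptions.WatchError:
            continue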

Connection to Redis is failing in golang/docker-compose.yml

After attempting to run the test cases via the docker-compose setup included in the golang subdirectory (I'm using a GitHub Codespaces VM configured with Ubuntu 18.04.1, Docker Compose 1.29.2, and Go 1.20.2), I find that the connection to Redis is failing (see below). I may give solving the problem a try, unless @YangKian has a better suggestion.

@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker-compose up -d
Creating network "golang_my-test" with driver "bridge"
Building golang
[+] Building 21.2s (11/11) FINISHED                                                                                                                                                            
 => [internal] load build definition from Dockerfile                                                                                                                                      0.4s
 => => transferring dockerfile: 224B                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                         0.5s
 => => transferring context: 2B                                                                                                                                                           0.0s
 => [internal] load metadata for docker.io/library/golang:1.14-alpine3.11                                                                                                                 1.6s
 => [auth] library/golang:pull token for registry-1.docker.io                                                                                                                             0.0s
 => [1/5] FROM docker.io/library/golang:1.14-alpine3.11@sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8                                                          10.1s
 => => resolve docker.io/library/golang:1.14-alpine3.11@sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8                                                           0.3s
 => => sha256:7ae5d4ed80128862597e54747828838e317dacf76670e58dbd9294cc268eb21b 1.36kB / 1.36kB                                                                                            0.0s
 => => sha256:d8bc21febf89d0a2f2937b0e7f35f58d5570a3cdaaa283580551cc565558efab 4.62kB / 4.62kB                                                                                            0.0s
 => => sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8 1.65kB / 1.65kB                                                                                            0.0s
 => => sha256:01872fc92c6cf715d78171a1b715efc05c9b103364c22cf4649e1d44fe2245bf 153B / 153B                                                                                                0.3s
 => => sha256:780d39f1cd5d8c6428547f47a5737bac30da1feff7c94335f65094ca77e2cebf 299.55kB / 299.55kB                                                                                        0.5s
 => => sha256:0a6724ff3fcd51338afdfdc2b1d4ffd04569818e31efad957213d67c29b45101 2.81MB / 2.81MB                                                                                            0.4s
 => => sha256:875fef68e8ab2a5b953f2425137b92b5c8091fbe79604aed01793184e8efbb65 107.28MB / 107.28MB                                                                                        2.4s
 => => extracting sha256:0a6724ff3fcd51338afdfdc2b1d4ffd04569818e31efad957213d67c29b45101                                                                                                 0.1s
 => => sha256:77ac76ad90fed421c2cb5a03fbf6486e4e8a83390168a7187fde99181b54c5f6 126B / 126B                                                                                                0.7s
 => => extracting sha256:780d39f1cd5d8c6428547f47a5737bac30da1feff7c94335f65094ca77e2cebf                                                                                                 0.1s
 => => extracting sha256:01872fc92c6cf715d78171a1b715efc05c9b103364c22cf4649e1d44fe2245bf                                                                                                 0.0s
 => => extracting sha256:875fef68e8ab2a5b953f2425137b92b5c8091fbe79604aed01793184e8efbb65                                                                                                 5.1s
 => => extracting sha256:77ac76ad90fed421c2cb5a03fbf6486e4e8a83390168a7187fde99181b54c5f6                                                                                                 0.0s
 => [internal] load build context                                                                                                                                                         0.4s
 => => transferring context: 143.39kB                                                                                                                                                     0.0s
 => [2/5] WORKDIR /src/app                                                                                                                                                                0.3s
 => [3/5] COPY go.mod go.sum ./                                                                                                                                                           0.4s
 => [4/5] RUN go mod download                                                                                                                                                             4.3s
 => [5/5] COPY . .                                                                                                                                                                        0.5s
 => exporting to image                                                                                                                                                                    2.8s
 => => exporting layers                                                                                                                                                                   2.7s
 => => writing image sha256:e6b98742660eabebec2fdb7bc491c4eff9bd744e6c1789db8381a0f24fa803d8                                                                                              0.0s
 => => naming to docker.io/library/golang_golang                                                                                                                                          0.0s
WARNING: Image for service golang was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Pulling redis (redis:6.0-rc-alpine)...
6.0-rc-alpine: Pulling from library/redis
cbdbe7a5bc2a: Pull complete
dc0373118a0d: Pull complete
cfd369fe6256: Pull complete
09a935bf1649: Pull complete
23985a6095ec: Pull complete
561cada643a7: Pull complete
Digest: sha256:ff868fb1ff9c8b42a23ba1a1a43c5c13a18ba737e1234321d42c55d924e4a057
Status: Downloaded newer image for redis:6.0-rc-alpine
Creating redis-in-action-golang ... done
Creating redis-in-action-redis  ... done
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker container ls
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                                       NAMES
1134cee2cdaa   redis:6.0-rc-alpine   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis-in-action-redis
b267b39b0199   golang_golang         "/bin/sh"                2 minutes ago   Up 2 minutes                                               redis-in-action-golang
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter0*/redisConn_test.go -v
named files must all be in one directory; have ./Chapter01/ and ./Chapter02/
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter01/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:29:11 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.006s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter02/redisConn_test.go -v
=== RUN   TestLoginCookies
2023/04/04 20:29:21 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.004s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter03/redisConn_test.go -v
=== RUN   TestLoginCookies
2023/04/04 20:29:29 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.003s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter04/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:29:37 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.003s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter05/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:29:49 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.009s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter06/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:30:06 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.004s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter07/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:30:17 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.004s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker exec -it redis-in-action-golang go test ./Chapter08/redisConn_test.go -v
=== RUN   Test
2023/04/04 20:30:23 Connect to redis client failed, err: dial tcp 127.0.0.1:6379: connect: connection refused
FAIL    command-line-arguments  0.003s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $

A bug in Golang code

position: golang/Chapter02/model/client.go UpdateToken method

Line 35: r.Conn.HSet("viewed:"+token, item, timestamp)
It should be r.Conn.ZAdd(...).

a bug in index_ad

Did you really test it?
I got this error from the index_ad function: redis.exceptions.ResponseError: Command # 3 (ZADD idx:indexed 0 1) of pipeline caused error: WRONGTYPE Operation against a key holding the wrong kind of value
The pipeline is broken.

questions about listing 2.9

# listing 2.9
def update_token(conn, token, user, item=None):
    timestamp = time.time()
    conn.hset('login:', token, user)
    conn.zadd('recent:', token, timestamp)
    if item:
        conn.zadd('viewed:' + token, item, timestamp)
        conn.zremrangebyrank('viewed:' + token, 0, -26)
        conn.zincrby('viewed:', item, -1)

conn.zincrby('viewed:', item, -1) means more views will lead to a lower score.

# listing 2.10
def rescale_viewed(conn):
    while not QUIT:
        conn.zremrangebyrank('viewed:', 0, -20001)
        conn.zinterstore('viewed:', {'viewed:': .5})
        time.sleep(300)

conn.zremrangebyrank('viewed:', 0, -20001) will keep the last 20000 items.
The order is from small to large score, so items with few views get a higher (less negative) score and are the ones kept.
I guess either conn.zincrby('viewed:', item, 1) or conn.zremrangebyrank('viewed:', 20001, -1) would work.
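
To illustrate the rank behavior being described, a tiny throwaway check (not from the book; it keeps the listings' old redis-py 2.x argument order, where zincrby takes the member before the amount):

def demo_viewed_ranks(conn):
    # throwaway key; more views -> more negative score, as in listing 2.9
    conn.delete('viewed-demo:')
    for item, views in [('a', 20), ('b', 15), ('c', 10), ('d', 5), ('e', 1)]:
        for _ in range(views):
            conn.zincrby('viewed-demo:', item, -1)

    # removing ranks 0 .. -3 keeps only the two highest (least negative)
    # scores, i.e. the least-viewed members 'd' and 'e'
    conn.zremrangebyrank('viewed-demo:', 0, -3)
    return conn.zrange('viewed-demo:', 0, -1, withscores=True)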

Maven Central deprecated http access causing 501 when gradle tries to fetch dependencies

Hi 👋

When I try to run any chapter I get:

% ./gradlew -Pchapter=1 run
:compileJava

FAILURE: Build failed with an exception.

* What went wrong:
Could not resolve all dependencies for configuration ':compile'.
> Could not resolve redis.clients:jedis:2.1.0.
  Required by:
      :redis-in-action:1.1
   > Could not HEAD 'http://repo1.maven.org/maven2/redis/clients/jedis/2.1.0/jedis-2.1.0.pom'. Received status code 501 from server: HTTPS Required
> Could not resolve org.javatuples:javatuples:1.2.
  Required by:
      :redis-in-action:1.1
   > Could not HEAD 'http://repo1.maven.org/maven2/org/javatuples/javatuples/1.2/javatuples-1.2.pom'. Received status code 501 from server: HTTPS Required
> Could not resolve com.google.code.gson:gson:2.2.2.
  Required by:
      :redis-in-action:1.1
   > Could not HEAD 'http://repo1.maven.org/maven2/com/google/code/gson/gson/2.2.2/gson-2.2.2.pom'. Received status code 501 from server: HTTPS Required

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 2.454 secs

When I try to access one of the links from the error messages I get:

501 HTTPS Required. 
Use https://repo1.maven.org/maven2/
More information at https://links.sonatype.com/central/501-https-required

Turns out that Maven Central disabled http access on 15th of January 2020:
https://blog.sonatype.com/central-repository-moving-to-https

The solution is to upgrade Gradle to a more recent version. This is the oldest version of Gradle which uses HTTPS by default:
https://docs.gradle.org/2.1/release-notes.html#use-of-https-for-mavencentral()-and-jcenter()-dependency-repositories

As an alternative, we can add the following line to build.gradle:

maven { url "https://repo.maven.apache.org/maven2" }

I will open a PR with the gradle update.

Error in chapter 4?

def purchase_item(conn, buyerid, itemid, sellerid, lprice):
    buyer = "users:%s"%buyerid
    seller = "users:%s"%sellerid
    item = "%s.%s"%(itemid, sellerid)
    inventory = "inventory:%s"%buyerid
    end = time.time() + 10
    pipe = conn.pipeline()


    while time.time() < end:
        try:
            pipe.watch("market:", buyer)                #A


            price = pipe.zscore("market:", item)        #B
            funds = int(pipe.hget(buyer, "funds"))      #B
            if price != lprice or price > funds:        #B
                pipe.unwatch()                          #B
                return None


            pipe.multi()                                #C
            pipe.hincrby(seller, "funds", int(price))   #C
            pipe.hincrby(buyer, "funds", int(-price))   #C
            pipe.sadd(inventory, itemid)                #C
            pipe.zrem("market:", item)                  #C
            pipe.execute()                              #C
            return True
        except redis.exceptions.WatchError:             #D
            pass                                        #D


    return False

You use a pipeline, but at #B you check whether the price is not equal to lprice; isn't the price unavailable at that point because of the pipelining? I didn't know you could read values during pipelining and retrieve them immediately.
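
For what it's worth, redis-py puts a pipeline into immediate-execution mode once watch() is called and only starts buffering again after multi(), so the reads at #B do return real values. A small sketch of that behavior (the key and member names here are placeholders, not the book's data):

pipe = conn.pipeline()
pipe.watch('market:', 'users:17')            # from here on, commands execute immediately
price = pipe.zscore('market:', 'itemA.4')    # returns a real float (or None) right away
funds = int(pipe.hget('users:17', 'funds') or 0)

pipe.multi()                                 # now commands are buffered until execute()
pipe.zrem('market:', 'itemA.4')
pipe.hincrby('users:17', 'funds', -int(price or 0))
pipe.execute()                               # raises WatchError if a watched key changed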

Hello, I have a suggestion

Right now the demos are all Python and Java. The Go language is so popular now; could you make a tutorial on Redis using Go?

Have a git question

How can I split the python directory out into a separate git project?
I can only understand the Python code.

Error in chapter2 ?

For both of these two lines:
https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch02_listing_source.py#L166
https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch02_listing_source.py#L174

conn.zincrby('viewed:', item, -1) 

It means that more views give a more negative score.

conn.zremrangebyrank('viewed:', 0, -20001) 

the popular members (with the most negative scores) will be removed.

see my test:

127.0.0.1:6379> ZADD myset -20 a
(integer) 1
127.0.0.1:6379> ZADD myset -15 b
(integer) 1
127.0.0.1:6379> ZADD myset -10 c
(integer) 1
127.0.0.1:6379> ZADD myset -5 d
(integer) 1
127.0.0.1:6379> ZADD myset -1 e
(integer) 1
127.0.0.1:6379> ZREMRANGEBYRANK myset 0 -3
(integer) 3
127.0.0.1:6379> ZRANGE myset 0 -1 WITHSCORES
1) "d"
2) "-5"
3) "e"
4) "-1"

golang/redisConn testing has a bug

Turns out that there is a bug in the testing script in the redisConn subdirectory. It may require some analysis to fix.

@rilma ➜ /workspaces/redis-in-action/golang (master) $ go test redisConn/redisConn_test.go -v
# command-line-arguments [command-line-arguments.test]
redisConn/redisConn_test.go:10:18: undefined: CheckVersion
redisConn/redisConn_test.go:14:17: undefined: CheckVersion
redisConn/redisConn_test.go:21:10: undefined: ConnectRedis
redisConn/redisConn_test.go:22:12: undefined: NewClient
FAIL    command-line-arguments [build failed]
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ 

Chapter02 Golang CacheRequest bug

Here, when the request cannot be cached, the callback function should be called directly, so the correct check is:

if !r.CanCache(request)

func (r *Client) CacheRequest(request string, callback func(string) string) string {
	if !r.CanCache(request) {
		return callback(request)
	}

	pageKey := "cache:" + hashRequest(request)
	content := r.Conn.Get(pageKey).Val()

	if content == "" {
		content = callback(request)
		r.Conn.Set(pageKey, content, 300*time.Second)
	}
	return content
}

Zincrby last two parameters order misplaced

There is a typo in chapter 5's Python code, in the update_stats function:
pipe.zincrby(destination, value, 'sum')
should be
pipe.zincrby(destination, 'sum', value)
the same as the other two zincrby calls.
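
For reference, the zincrby argument order changed in redis-py 3.0, which may be the mismatch here (a hedged note; I have not checked which client version the repository pins). A tiny illustration against a throwaway key:

import redis

conn = redis.Redis()
conn.delete('stats-demo:')                 # throwaway key, not from the book

# redis-py 2.x signature: zincrby(name, value, amount=1)  -- member first
# redis-py 3.x signature: zincrby(name, amount, value)    -- amount first
# The call below uses the 2.x order that the book's listings assume;
# under redis-py 3.x the same call fails with "value is not a valid float".
conn.zincrby('stats-demo:', 'sum', 5)
print(conn.zscore('stats-demo:', 'sum'))   # 5.0 under redis-py 2.x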

expire duplication & race condition (exercise from chapter 3)

Hi there, I have some questions :)

  1. What's the point of duplicating the expire in chapter 3?

pipeline.expire('voted:' + article_id, int(posted-cutoff)) #B

(it is already set in post_article from chapter 1)

conn.expire(voted, ONE_WEEK_IN_SECONDS) #B

  2. Exercise from chapter 3: removing race conditions

How can I actually observe this behavior? I always get the expected results with article_vote from chapter 1 (running concurrently:
https://github.com/egoarka/redis-in-action-typescript/blob/77dadb7214d4a8bec07c6d8d7217d1070c93939e/src/chapter3/ch3.spot.ts#L24)

execution logs: https://gist.github.com/egoarka/565bc9ade7afdc4158f438b35b5fd8d8

  3. Missing indentation?
    conn.zincrby('viewed:', item, -1)

questions about errata for listing 2.10

There is a bug on the third line of the rescale_viewed() function definition.

The full function definition reads:

def rescale_viewed(conn):
    while not QUIT:
        conn.zremrangebyrank('viewed:', 20000, -1)
        conn.zinterstore('viewed:', {'viewed:': .5})
        time.sleep(300)

The third line, updated inline, should read:

def rescale_viewed(conn):
    while not QUIT:
        conn.zremrangebyrank('viewed:', 0, -20001)
        conn.zinterstore('viewed:', {'viewed:': .5})
        time.sleep(300)

The old version removes the items that are not in the top 20000; the new version keeps the items in the last 20000.

I cannot understand why you changed the code. Can you explain why? Thanks.

questions about errata for listing 6.9

In the errata for listing 6.9 (http://www.manning.com/carlson/excerpt_errata.html), it says: "Code bug: there is an extra pipe.watch(buyer) call that is unnecessary, which can be removed."

  1. If this can be removed, I think the try ... except redis.exceptions.WatchError (https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch06_listing_source.py#L203) and pipe.unwatch() (https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch06_listing_source.py#L194) can be removed, too. Is that correct?

  2. Could you explain more about why this can be removed?
    My understanding is there's an assumption that purchasing items is the only way to reduce the buyer's funds. If this assumption stands, the market-level lock already makes sure no multiple purchases occur at the same time and therefore it's not necessary to watch the buyer info.

  3. If a fine-grained lock is used, it's still necessary to watch this, isn't it?

lock usage in Chapter 8 example

In Chapter 8 Twitter clone example, I think after acquiring the lock, if something goes wrong, the lock should be released before function returning.

For example, in listing 8.1, if conn.hget('users:', llogin) finds an existing user, the lock should be released before returning None. What do you think about it?
Perhaps it's better to use a with-statement context manager so it's not necessary to explicitly release the lock in multiple places (see the sketch after the listing below).

def create_user(conn, login, name):
    llogin = login.lower()
    lock = acquire_lock_with_timeout(conn, 'user:' + llogin, 1) #A
    if not lock:                            #B
        return None                         #B

    if conn.hget('users:', llogin):         #C
        return None                         #C

    id = conn.incr('user:id:')              #D
    .......
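
If it helps the discussion, here is a minimal sketch of that context-manager idea, assuming the book's chapter 6 helpers acquire_lock_with_timeout() and release_lock() are available (the create_user body is abbreviated just like the listing above):

import contextlib

@contextlib.contextmanager
def locked(conn, name, acquire_timeout=1, lock_timeout=10):
    identifier = acquire_lock_with_timeout(conn, name, acquire_timeout, lock_timeout)
    try:
        yield identifier                    # False if the lock was not acquired
    finally:
        if identifier:
            release_lock(conn, name, identifier)

def create_user(conn, login, name):
    llogin = login.lower()
    with locked(conn, 'user:' + llogin, 1) as lock:
        if not lock:
            return None
        if conn.hget('users:', llogin):
            return None                     # the lock is still released on exit
        id = conn.incr('user:id:')
        ...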

Java code for chapter 8 is broken

----- testRefillTimeline -----
Exception in thread "main" java.lang.IndexOutOfBoundsException: toIndex = 5
        at java.util.ArrayList.subListRangeCheck(ArrayList.java:1014)
        at java.util.ArrayList.subList(ArrayList.java:1006)
        at Chapter08.refillTimeline(Chapter08.java:460)
        at Chapter08.refillTimeline(Chapter08.java:431)
        at Chapter08.testRefillTimeline(Chapter08.java:130)
        at Chapter08.run(Chapter08.java:36)
        at Chapter08.main(Chapter08.java:17)
:run FAILED

Failing Go testing files for Chapters 2 and 5

After forking this repository and checking out the master branch, I find that the Go testing scripts for Chapters 2 and 5 are failing in a GitHub Codespaces VM (Ubuntu 18.04.1; Docker Compose 1.29.2; Go 1.20.2). I have enabled Redis according to the instructions found at redis-in-action/golang.

@rilma ➜ /workspaces/redis-in-action/golang (master) $ docker-compose up -d
Creating network "golang_my-test" with driver "bridge"
Building golang
[+] Building 21.2s (11/11) FINISHED                                                                                                                                                            
 => [internal] load build definition from Dockerfile                                                                                                                                      0.4s
 => => transferring dockerfile: 224B                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                         0.5s
 => => transferring context: 2B                                                                                                                                                           0.0s
 => [internal] load metadata for docker.io/library/golang:1.14-alpine3.11                                                                                                                 1.6s
 => [auth] library/golang:pull token for registry-1.docker.io                                                                                                                             0.0s
 => [1/5] FROM docker.io/library/golang:1.14-alpine3.11@sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8                                                          10.1s
 => => resolve docker.io/library/golang:1.14-alpine3.11@sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8                                                           0.3s
 => => sha256:7ae5d4ed80128862597e54747828838e317dacf76670e58dbd9294cc268eb21b 1.36kB / 1.36kB                                                                                            0.0s
 => => sha256:d8bc21febf89d0a2f2937b0e7f35f58d5570a3cdaaa283580551cc565558efab 4.62kB / 4.62kB                                                                                            0.0s
 => => sha256:4f1c80d88c5879067f063770c774a8ffd4de47b684333cdbe9a4ce661931b9b8 1.65kB / 1.65kB                                                                                            0.0s
 => => sha256:01872fc92c6cf715d78171a1b715efc05c9b103364c22cf4649e1d44fe2245bf 153B / 153B                                                                                                0.3s
 => => sha256:780d39f1cd5d8c6428547f47a5737bac30da1feff7c94335f65094ca77e2cebf 299.55kB / 299.55kB                                                                                        0.5s
 => => sha256:0a6724ff3fcd51338afdfdc2b1d4ffd04569818e31efad957213d67c29b45101 2.81MB / 2.81MB                                                                                            0.4s
 => => sha256:875fef68e8ab2a5b953f2425137b92b5c8091fbe79604aed01793184e8efbb65 107.28MB / 107.28MB                                                                                        2.4s
 => => extracting sha256:0a6724ff3fcd51338afdfdc2b1d4ffd04569818e31efad957213d67c29b45101                                                                                                 0.1s
 => => sha256:77ac76ad90fed421c2cb5a03fbf6486e4e8a83390168a7187fde99181b54c5f6 126B / 126B                                                                                                0.7s
 => => extracting sha256:780d39f1cd5d8c6428547f47a5737bac30da1feff7c94335f65094ca77e2cebf                                                                                                 0.1s
 => => extracting sha256:01872fc92c6cf715d78171a1b715efc05c9b103364c22cf4649e1d44fe2245bf                                                                                                 0.0s
 => => extracting sha256:875fef68e8ab2a5b953f2425137b92b5c8091fbe79604aed01793184e8efbb65                                                                                                 5.1s
 => => extracting sha256:77ac76ad90fed421c2cb5a03fbf6486e4e8a83390168a7187fde99181b54c5f6                                                                                                 0.0s
 => [internal] load build context                                                                                                                                                         0.4s
 => => transferring context: 143.39kB                                                                                                                                                     0.0s
 => [2/5] WORKDIR /src/app                                                                                                                                                                0.3s
 => [3/5] COPY go.mod go.sum ./                                                                                                                                                           0.4s
 => [4/5] RUN go mod download                                                                                                                                                             4.3s
 => [5/5] COPY . .                                                                                                                                                                        0.5s
 => exporting to image                                                                                                                                                                    2.8s
 => => exporting layers                                                                                                                                                                   2.7s
 => => writing image sha256:e6b98742660eabebec2fdb7bc491c4eff9bd744e6c1789db8381a0f24fa803d8                                                                                              0.0s
 => => naming to docker.io/library/golang_golang                                                                                                                                          0.0s
WARNING: Image for service golang was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Pulling redis (redis:6.0-rc-alpine)...
6.0-rc-alpine: Pulling from library/redis
cbdbe7a5bc2a: Pull complete
dc0373118a0d: Pull complete
cfd369fe6256: Pull complete
09a935bf1649: Pull complete
23985a6095ec: Pull complete
561cada643a7: Pull complete
Digest: sha256:ff868fb1ff9c8b42a23ba1a1a43c5c13a18ba737e1234321d42c55d924e4a057
Status: Downloaded newer image for redis:6.0-rc-alpine
Creating redis-in-action-golang ... done
Creating redis-in-action-redis  ... done
@rilma ➜ /workspaces/redis-in-action/golang (master) $ 

Chapter 2's testing script is failing as shown below:

@rilma ➜ /workspaces/redis-in-action/golang (master) $ go test ./Chapter02/redisConn_test.go -v
=== RUN   TestLoginCookies
=== RUN   TestLoginCookies/Test_UpdateToken
    redisConn_test.go:21: We just logged-in/update token: 
         7673ad36-442e-4b3c-b285-300b828fad9b
    redisConn_test.go:22: For user:  username
        
    redisConn_test.go:23: 
        What username do we get when we look-up that token?
        
    redisConn_test.go:25: username:  username
    redisConn_test.go:28: Let's drop the maximum number of cookies to 0 to clean them out
        
    redisConn_test.go:29: We will start a thread to do the cleaning, while we stop it later
        
    redisConn_test.go:40: The current number of sessions still available is: 0
    redisConn_test.go:41: want get 1, actual get 0
=== RUN   TestLoginCookies/Test_shopping_cart_cookie
    redisConn_test.go:46: We'll refresh our session...
    redisConn_test.go:48: And add an item to the shopping cart
    redisConn_test.go:51: Our shopping cart currently has: map[itemY:3]
    redisConn_test.go:55: Let's clean out our sessions and carts
    redisConn_test.go:64: Our shopping cart now contains: map[]
=== RUN   TestLoginCookies/Test_cache_request
    redisConn_test.go:71: We are going to cache a simple request against http://test.com/?item=itemX
    redisConn_test.go:75: We got initial content:  content for http://test.com/?item=itemX
    redisConn_test.go:78: To test that we've cached the request, we'll pass a bad callback
    redisConn_test.go:80: We ended up getting the same response! content for http://test.com/?item=itemX
=== RUN   TestLoginCookies/Test_cache_row
    redisConn_test.go:88: First, let's schedule caching of itemX every 5 seconds
    redisConn_test.go:90: Our schedule looks like:
    redisConn_test.go:92: itemX 1.68064047e+09
    redisConn_test.go:95: We'll start a caching thread that will cache the data...
    redisConn_test.go:98: Our cached data looks like:
    redisConn_test.go:100: {"Id":"itemX","Data":"data to cache...","Cached":1680640470}
    redisConn_test.go:103: We'll check again in 5 seconds...
    redisConn_test.go:105: Notice that the data has changed...
    redisConn_test.go:107: {"Id":"itemX","Data":"data to cache...","Cached":1680640475}
    redisConn_test.go:111: Let's force un-caching
    redisConn_test.go:115: The cache was cleared? true
--- FAIL: TestLoginCookies (15.02s)
    --- FAIL: TestLoginCookies/Test_UpdateToken (3.00s)
    --- PASS: TestLoginCookies/Test_shopping_cart_cookie (3.00s)
    --- PASS: TestLoginCookies/Test_cache_request (0.00s)
    --- PASS: TestLoginCookies/Test_cache_row (9.01s)
FAIL
FAIL    command-line-arguments  15.023s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $

Same situation with Chapter 5's script:

@rilma ➜ /workspaces/redis-in-action/golang (master) $ go test ./Chapter05/redisConn_test.go -v
=== RUN   Test
=== RUN   Test/Test_log_recent
    redisConn_test.go:22: Let's write a few logs to the recent log
    redisConn_test.go:27: The current recent message log has this many messages: 5
    redisConn_test.go:28: Those messages include:
    redisConn_test.go:30: 2023-04-04 20:37:44.981333735 +0000 UTC this is message 4
    redisConn_test.go:30: 2023-04-04 20:37:44.981222936 +0000 UTC this is message 3
    redisConn_test.go:30: 2023-04-04 20:37:44.981102737 +0000 UTC this is message 2
    redisConn_test.go:30: 2023-04-04 20:37:44.980939539 +0000 UTC this is message 1
    redisConn_test.go:30: 2023-04-04 20:37:44.980135048 +0000 UTC this is message 0
=== RUN   Test/Test_log_common
    redisConn_test.go:37: Let's write some items to the common log
    redisConn_test.go:44: The current number of common messages is: 5
    redisConn_test.go:45: Those common messages are:
    redisConn_test.go:47: {60 message-5}
    redisConn_test.go:47: {35 message-4}
    redisConn_test.go:47: {32 message-3}
    redisConn_test.go:47: {22 message-2}
    redisConn_test.go:47: {10 message-1}
=== RUN   Test/Test_counters
    redisConn_test.go:54: Let's update some counters for now and a little in the future
    redisConn_test.go:60: We have some per-second counters: 10
    redisConn_test.go:63: We have some per-5-second counters: 2
    redisConn_test.go:64: These counters include:
    redisConn_test.go:70: [1680640665 15]
    redisConn_test.go:70: [1680640670 20]
    redisConn_test.go:75: Let's clean out some counters by setting our sample count to 0
    redisConn_test.go:81: Did we clean out all of the counters? 0
=== RUN   Test/Test_stats
    redisConn_test.go:87: Let's add some data for our statistics!
    redisConn_test.go:92: We have some aggregate statistics: [zincrby stats:temp:example 1 count: 47 zincrby stats:temp:example 6 sum: 349 zincrby stats:temp:example 36 sumq: 2873]
    redisConn_test.go:94: Which we can also fetch manually:
    redisConn_test.go:96: min 5
    redisConn_test.go:96: max 12
    redisConn_test.go:96: count 47
    redisConn_test.go:96: sum 349
    redisConn_test.go:96: sumq 2873
    redisConn_test.go:96: average 7.425531914893617
    redisConn_test.go:96: stddev 2.4737287543395396
=== RUN   Test/Test_access_time
    redisConn_test.go:103: Let's calculate some access times...
    redisConn_test.go:109: The slowest access times are:
    redisConn_test.go:112: req-6
    redisConn_test.go:112: req-2
    redisConn_test.go:112: req-8
    redisConn_test.go:112: req-7
    redisConn_test.go:112: req-3
    redisConn_test.go:112: req-1
    redisConn_test.go:112: req-9
    redisConn_test.go:112: req-5
    redisConn_test.go:112: req-4
    redisConn_test.go:112: req-0
=== RUN   Test/Test_is_under_maintenance
    redisConn_test.go:119: Are we under maintenance (we shouldn't be)? false
    redisConn_test.go:121: We cached this, so it should be the same: false
    redisConn_test.go:123: But after a sleep, it should change: true
    redisConn_test.go:124: Cleaning up...
    redisConn_test.go:127: Should be False again: false
=== RUN   Test/Test_ip_lookup
    redisConn_test.go:132: Importing IP addresses to Redis... (this may take a while)
2023/04/04 20:37:58 open file fault, filename: ../Chapter05/GeoLite2-City-CSV_20200121/GeoLite2-City-Blocks-IPv4.csv, err: open ../Chapter05/GeoLite2-City-CSV_20200121/GeoLite2-City-Blocks-IPv4.csv: no such file or directory
FAIL    command-line-arguments  13.692s
FAIL
@rilma ➜ /workspaces/redis-in-action/golang (master) $ 

The aforementioned scripts may require a fix. Might @YangKian be able to provide one?

Variable name collision in ``FollowFilter()`` function.

Hi, I notice that the FollowFilter() function in ch08_listing_source.py defines two names variables, and the inner names shadows the names argument, which makes the names set always empty. It seems like a bug:

def FollowFilter(names):  # The 'names' argument
    names = set()         # a new 'names' set, which shadows the 'names' argument
    for name in names:                                                                                  
        names.add('@' + name.lower().lstrip('@'))

    def check(status):
        message_words = set(status['message'].lower().split())
        message_words.add('@' + status['login'].lower()) 

        return message_words & names 
    return check
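
A possible fix is simply to accumulate into a differently named set so the argument is not shadowed; a sketch (the name nset is my choice, not necessarily what the book's listing uses):

def FollowFilter(names):
    nset = set()                                  # don't shadow the 'names' argument
    for name in names:
        nset.add('@' + name.lower().lstrip('@'))

    def check(status):
        message_words = set(status['message'].lower().split())
        message_words.add('@' + status['login'].lower())
        return message_words & nset
    return check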

Redis service is never up in python/docker-compose.yml

After forking this repository and checking out the master branch, I find the following error in a GitHub Codespaces VM (Ubuntu 18.04.1; Docker Compose 1.29.2):

@rilma ➜ /workspaces/redis-in-action (task/python-qa) $ cd python/
@rilma ➜ /workspaces/redis-in-action/python (task/python-qa) $ docker-compose up -d
Building python
[+] Building 1.7s (10/10) FINISHED                                                                                                                                              
 => [internal] load build definition from Dockerfile                                                                                                                       0.2s
 => => transferring dockerfile: 34B                                                                                                                                        0.0s
 => [internal] load .dockerignore                                                                                                                                          0.3s
 => => transferring context: 2B                                                                                                                                            0.0s
 => [internal] load metadata for docker.io/library/python:3.6                                                                                                              1.0s
 => [auth] library/python:pull token for registry-1.docker.io                                                                                                              0.0s
 => [1/4] FROM docker.io/library/python:3.6@sha256:f8652afaf88c25f0d22354d547d892591067aa4026a7fa9a6819df9f300af6fc                                                        0.0s
 => [internal] load build context                                                                                                                                          0.1s
 => => transferring context: 37B                                                                                                                                           0.0s
 => CACHED [2/4] WORKDIR /usr/src/app                                                                                                                                      0.0s
 => CACHED [3/4] COPY requirements.txt ./                                                                                                                                  0.0s
 => CACHED [4/4] RUN pip install --no-cache-dir -r requirements.txt                                                                                                        0.0s
 => exporting to image                                                                                                                                                     0.2s
 => => exporting layers                                                                                                                                                    0.0s
 => => writing image sha256:2130d016f3b65f51f29038dd31108061b43ec6a19c212e86967e74ab04e0eaaa                                                                               0.0s
 => => naming to docker.io/library/python:redis-in-action                                                                                                                  0.0s
WARNING: Image for service python was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Pulling redis (redis:latest)...
latest: Pulling from library/redis
f1f26f570256: Pull complete
8a1809b0503d: Pull complete
d792b14d05f9: Pull complete
ad29eaf93bf6: Pull complete
7cda84ccdb33: Pull complete
95f837a5984d: Pull complete
Digest: sha256:7b83a0167532d4320a87246a815a134e19e31504d85e8e55f0bb5bb9edf70448
Status: Downloaded newer image for redis:latest
Creating redis-in-action-python ... 
Creating redis-in-action-redis  ... 
Creating redis-in-action-python ... done
ERROR: for redis-in-action-redis  "host" network_mode is incompatible with port_bindings

ERROR: for redis  "host" network_mode is incompatible with port_bindings
Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 81, in main
  File "compose/cli/main.py", line 203, in perform_command
  File "compose/metrics/decorator.py", line 18, in wrapper
  File "compose/cli/main.py", line 1186, in up
  File "compose/cli/main.py", line 1182, in up
  File "compose/project.py", line 702, in up
  File "compose/parallel.py", line 108, in parallel_execute
  File "compose/parallel.py", line 206, in producer
  File "compose/project.py", line 688, in do
  File "compose/service.py", line 564, in execute_convergence_plan
  File "compose/service.py", line 480, in _execute_convergence_create
  File "compose/parallel.py", line 108, in parallel_execute
  File "compose/parallel.py", line 206, in producer
  File "compose/service.py", line 478, in <lambda>
  File "compose/service.py", line 457, in create_and_start
  File "compose/service.py", line 334, in create_container
  File "compose/service.py", line 941, in _get_container_create_options
  File "compose/service.py", line 1073, in _get_container_host_config
  File "docker/api/container.py", line 598, in create_host_config
  File "docker/types/containers.py", line 339, in __init__
docker.errors.InvalidArgument: "host" network_mode is incompatible with port_bindings
[8368] Failed to execute script docker-compose
@rilma ➜ /workspaces/redis-in-action/python (task/python-qa) $

The corresponding Python and Redis Docker images are downloaded, but only the Python service is up; the Redis service is down. The Docker Compose file may require a fix.

@rilma ➜ /workspaces/redis-in-action/python (task/python-qa) $ docker image ls
REPOSITORY   TAG               IMAGE ID       CREATED        SIZE
python       redis-in-action   2130d016f3b6   25 hours ago   911MB
redis        latest            31f08b90668e   10 days ago    117MB
@rilma ➜ /workspaces/redis-in-action/python (task/python-qa) $ docker container ls
CONTAINER ID   IMAGE                    COMMAND     CREATED              STATUS              PORTS     NAMES
d7067126edfc   python:redis-in-action   "python3"   About a minute ago   Up About a minute             redis-in-action-python
@rilma ➜ /workspaces/redis-in-action/python (task/python-qa) $

[2.4. Database row caching](https://livebook.manning.com/book/redis-in-action/chapter-2/59)

def schedule_row_cache(conn, row_id, delay):
    conn.zadd('delay:', row_id, delay)           #A
    conn.zadd('schedule:', row_id, time.time())  #B

def cache_rows(conn):
    while not QUIT:
        next = conn.zrange('schedule:', 0, 0, withscores=True)  #A
        now = time.time()
        if not next or next[0][1] > now:
            time.sleep(.05)                                     #B
            continue

        row_id = next[0][0]
        delay = conn.zscore('delay:', row_id)                   #C
        if delay <= 0:
            conn.zrem('delay:', row_id)                         #D
            conn.zrem('schedule:', row_id)                      #D
            conn.delete('inv:' + row_id)                        #D
            continue

        row = Inventory.get(row_id)                             #E
        conn.zadd('schedule:', row_id, now + delay)             #F
        conn.set('inv:' + row_id, json.dumps(row.to_dict()))    #F
