
rchowell.github.io's Issues

Gumby Project — Is this ... contrived? 🧐

Gumby Project

Mountain Project for Plastic

Problem

At my college climbing gym we could only set six to ten new routes per week. That sounds like a lot, but those routes had to be spread across all difficulties, which meant only one or two new routes per week at your preferred difficulty. We would frequently climb four to six times per week, so having enough routes was often a limiting factor. In response, we created new routes by combining holds from established ones. This is very common in climbing gyms, but we needed some way to document the routes because sometimes you were in class while your friends were setting new ones! This was before apps like Stokt existed, so Gumby Project was born.

Development

The app was built the weekend of the infamous Polar Vortex. This was the first time Notre Dame had cancelled classes in over a decade, and I treated the free day as a mini-hackathon.

The app's architecture is based on the model-view-presenter pattern because the Firestore SDKs were all in their infancy and streaming-oriented architectures like BLoC were just surfacing. I'm using Firestore for data and Firebase Storage for images.

Screenshots

SerDe for GTP with Protobuf and gRPC Server Definition

Full Code

About

Somewhat of a follow-up to GTP Protobuf Definition - 11, in which I posted the full protobuf definition but didn't publish the serializing and deserializing extensions. The extensions live in the same Go package as the generated code so that I can "extend" the generated structs with receiver functions. I've included a small Makefile.

make gen
make clean

I had been working on a project/exercise (some of which I'll open source) that involved communicating with Go (Baduk) engines. These engines speak the Go Text Protocol (GTP), which is sent as plaintext to the engine's stdin or over a network interface. I did not want to use plaintext in my program, so I chose protobuf for the GTP and Go/Baduk structures and wrapped the engines with a gRPC service.

My project's central Go (Baduk) server operates on the generated Go (lang) code and communicates with the AIs/engines, e.g. GnuGo and KataGo, through the generated gRPC stub, because each engine is wrapped by the generated server. This gives structured input and output between the central server and the various engines, along with the usual benefits of gRPC and a generated client.

GTP SerDe

Parsing / Deserialization

The gRPC wrapper server receives output from the engine as plaintext and parses it into response objects to return to the caller.

// Parsing a generic response
func parseResponse(i, command string, minElements int) (*Response, error) {
    clean := strings.ReplaceAll(controlChars.ReplaceAllString(i, ""), HT, SPACE)
    // tokens ~technically~ aren't split on a SPACE
    tokens := strings.Split(clean, SPACE)
    l := len(tokens)
    id, err := parseCommandId(tokens[0])
    if err != nil {
        return nil, err
    }
    // handle empty responses
    if l < 2 {
        return &Response{Id: int32(id)}, nil
    }
    if l-1 < minElements {
        return nil, fmt.Errorf("%s response requires %d response elements", command, minElements)
    }
    succeeded, err := isSuccessful(clean)
    if err != nil {
        return nil, err
    }
    if !succeeded {
        return &Response{
            Id:    int32(id),
            Error: errors.New(strings.Join(tokens[1:l], SPACE)),
        }, nil
    }
    return &Response{
        Id:       int32(id),
        Elements: tokens[1:l],
    }, nil
}

// Parsing a particular and more complicated response
func ParseGenMoveResponse(i string) (*GenMoveResponse, error) {
    res, err := parseResponse(i, GenMove, 1)
    if err != nil {
        return nil, err
    }
    t := strings.ToUpper(res.Elements[0])
    switch t {
        case Move_PASS.String():
            return &GenMoveResponse{Id: res.Id, Move: &Move{Type: Move_PASS}}, nil
        case Move_RESIGN.String():
            return &GenMoveResponse{Id: res.Id, Move: &Move{Type: Move_RESIGN}}, nil
    }
    v, err := ParseVertex(t)
    if err != nil {
        return nil, err
    }
    return &GenMoveResponse{
        Id: res.Id,
        Move: &Move{
            Type:   Move_PLACE,
            Vertex: v,
        },
    }, nil
}
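The helpers `parseCommandId` and `isSuccessful` (and the `HT`/`SPACE` constants) aren't shown in the snippet above. Here's a minimal sketch of what they might look like — my assumption based on GTP's framing rules, not the actual code: responses begin with `=` on success or `?` on failure, optionally followed by the echoed command id.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const (
	HT    = "\t"
	SPACE = " "
)

// parseCommandId extracts the echoed command id from the first token
// of a response, e.g. "=1" -> 1, "?2" -> 2. A bare "=" or "?" has no id.
func parseCommandId(token string) (int, error) {
	if len(token) == 0 || (token[0] != '=' && token[0] != '?') {
		return 0, fmt.Errorf("malformed response token %q", token)
	}
	if len(token) == 1 {
		return 0, nil
	}
	return strconv.Atoi(token[1:])
}

// isSuccessful reports whether a cleaned response line indicates
// success ('=') or failure ('?').
func isSuccessful(response string) (bool, error) {
	switch {
	case strings.HasPrefix(response, "="):
		return true, nil
	case strings.HasPrefix(response, "?"):
		return false, nil
	default:
		return false, fmt.Errorf("response must begin with '=' or '?'")
	}
}
```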

Encoding / Serialization

The gRPC wrapper server receives the caller's request objects and calls "ToCommand" to get the plaintext GTP command to send to the engine's stdin.

// Example for `Play`
func (x *PlayRequest) ToCommand() string {
    if x.Id == 0 {
        return fmt.Sprintf("%s %s%s", Play, x.Move.GTPString(), LF)
    }
    return fmt.Sprintf("%d %s %s%s", x.Id, Play, x.Move.GTPString(), LF)
}

func (x *Move) GTPString() string {
    switch x.Type {
    case Move_PLACE:
        return fmt.Sprintf("%s %s%d", x.Color, columnToLetter(x.Vertex.Column), x.Vertex.Row+1)
    case Move_RESIGN:
        return fmt.Sprintf("%s %s", x.Color, Move_RESIGN)
    default:
        return fmt.Sprintf("%s %s", x.Color, Move_PASS)
    }
}
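`columnToLetter` is referenced above but not shown. This is a hypothetical sketch: GTP board coordinates use the letters A..T but skip 'I' (to avoid confusion with similar-looking characters), which is why a plain `'A' + col` won't do.

```go
package main

// columnToLetter maps a zero-based column index to its GTP letter.
// The GTP coordinate alphabet omits 'I'.
func columnToLetter(col int32) string {
	const letters = "ABCDEFGHJKLMNOPQRST" // note: no 'I'
	if col < 0 || int(col) >= len(letters) {
		return "?"
	}
	return string(letters[col])
}
```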

GTP gRPC Server

// Example Request Response Objects
message SetKomiRequest {
  int32 id = 1;
  float komi = 2;
}

message SetKomiResponse {
  int32 id = 1;
  Error error = 2;
}

service GTP {

  // Version of the GTP Protocol
  rpc ProtocolVersion (ProtocolVersionRequest) returns (ProtocolVersionResponse) {}

  // E.g. “GNU Go”, “GoLois”, “Many Faces of Go”. The name does not include any version information. Use `version`.
  rpc Name (NameRequest) returns (NameResponse) {}

  // E.g. “3.1.33”, “10.5”. Engines without a sense of version number return the empty string.
  rpc Version (VersionRequest) returns (VersionResponse) {}

  // Returns “true” if the command is known by the engine, “false” otherwise
  rpc KnownCommand (KnownCommandRequest) returns (KnownCommandResponse) {}

  // Lists all known commands, including required ones and private extensions.
  rpc ListCommands (ListCommandsRequest) returns (ListCommandsResponse) {}

  // The session is terminated and the connection is closed.
  rpc Quit (QuitRequest) returns (QuitResponse) {}

  // Changes the board size. If the engine cannot handle the new size, fails with the error message ”unacceptable size”.
  rpc BoardSize (BoardSizeRequest) returns (BoardSizeResponse) {}

  // Clears the board, captured stones are reset, and move history is reset
  rpc ClearBoard (ClearBoardRequest) returns (ClearBoardResponse) {}

  // Changes the Komi
  rpc Komi (KomiRequest) returns (KomiResponse) {}

  // Plays the given move
  rpc Play (PlayRequest) returns (PlayResponse) {}

  // Asks the engine to generate a move, it will play it, and will return what was played
  rpc GenMove (GenMoveRequest) returns (GenMoveResponse) {}

  // The board and captured stones are reset to the previous move
  rpc Undo(UndoRequest) returns(UndoResponse) {}

}

Moiré Patterns in CMYK Halftone Printing

What is a moiré pattern?

Moiré patterns are large-scale interference patterns that can be produced when an opaque ruled pattern with transparent gaps is overlaid on another similar pattern. For the moiré interference pattern to appear, the two patterns must not be completely identical, but rather e.g. displaced, rotated or have slightly different pitch. Wiki

What is CMYK Printing?

The CMYK color model is a subtractive color model, based on the CMY color model, used in color printing ... With CMYK printing, halftoning (also called screening) allows for less than full saturation of the primary colors; tiny dots of each primary color are printed in a pattern small enough that humans perceive a solid color ... To improve print quality and reduce moiré patterns, the screen for each color is set at a different angle. Wiki

How are they related?
A CMYK print is made in four layers, each halftoned with dots. For a given color, the image is filtered and groups of pixels are mapped to a dot whose radius represents the group's intensity. Each dot's center is placed at regular grid intervals. It's now obvious how moiré patterns are related: each layer is an opaque ruled pattern with transparent gaps overlaid on other similar patterns. To reduce the adverse effects of these patterns, such as color distortion and ink build-up, each layer is set at a different angle.

Variations of Patterns
Here's a visualization of the patterns that can form as you adjust screen angles. You may want to zoom 😄

package main

import (
	"flag"
	"image"
	"image/color"
	"image/gif"
	"math"
	"os"
)

const (
	rate  = 8
	size  = 128
	delay = 10
)

var cmyk = []color.Color{
	color.White, // background
	color.CMYK{C: math.MaxUint8},
	color.CMYK{M: math.MaxUint8},
	color.CMYK{Y: math.MaxUint8},
	color.CMYK{K: math.MaxUint8},
}

var frames = rate * 360

// https://i.imgur.com/k86QK.png
var outerSquareSize = int(math.Ceil(size * 2 / math.Sqrt2))
var radius = size / 2
var center = (outerSquareSize / 2) + 1

func main() {
	gaps := flag.Int("gaps", 4, "gap size between pixels")
	flag.Parse()

	anim := gif.GIF{LoopCount: frames}
	for frame := 0; frame < frames; frame++ {
		anim.Delay = append(anim.Delay, delay)
		anim.Image = append(anim.Image, generateImage(frame, *gaps))
	}
	gif.EncodeAll(os.Stdout, &anim)
}

func generateImage(frame, gaps int) *image.Paletted {
	bounds := image.Rect(0, 0, outerSquareSize, outerSquareSize)
	img := image.NewPaletted(bounds, cmyk)
	for c := 1; c <= 4; c++ {
		a := float64(c*frame) * math.Pi / 180 / rate // deg to rad
		for x := -radius; x < radius; x += gaps {
			fx := float64(x)
			for y := -radius; y < radius; y += gaps {
				fy := float64(y)
				rx := int(fx*math.Cos(a) - fy*math.Sin(a))
				ry := int(fx*math.Sin(a) + fy*math.Cos(a))
				img.SetColorIndex(rx+center, ry+center, uint8(c))
			}
		}
	}
	return img
}

GTP Protobuf Definition

I'm starting a project which requires communicating with Go engines such as GnuGo and KataGo. Engines communicate over the Go Text Protocol (GTP). I'd prefer something more structured, so I've written protobuf definitions for a Go engine server and its types. From here, it's easy to wrap the engines with a simple gRPC -> GTP translator.

/**
 * Author R. C. Howell 2020
 *
 * This package contains protobuf definitions for the Go Text Protocol 2.0 [1]. It is for use with gRPC
 * servers/controllers to communicate between applications and Go engines. You will find types have been adapted to
 * impose more structure on the protocol.
 *
 * [1] http://www.lysator.liu.se/~gunnar/gtp/
 */
syntax = "proto3";
package gtp;

option go_package = "github.com/rchowell/dandan/gtp";

message Vertex {
  int32 row = 1;
  string column = 2;
}

enum Color {
  EMPTY = 0;
  BLACK = 1;
  WHITE = 2;
}

message Move {
  Type type = 1;
  Color color = 2;
  Vertex vertex = 3;

  enum Type {
    PASS = 0;
    RESIGN = 1;
    PLACE = 2;
  }
}

message Game {
  int32 size = 1;
  float komi = 2;
  repeated Move moves = 3;
}

message Error {
  string message = 2;
}

message ProtocolVersionRequest {
  int32 id = 1;
}

message ProtocolVersionResponse {
  int32 id = 1;
  int32 version = 2;
}

message NameRequest {
  int32 id = 1;
}

message NameResponse {
  int32 id = 1;
  string name = 2;
}

message VersionRequest {
  int32 id = 1;
}

message VersionResponse {
  int32 id = 1;
  string version = 2;
}

message KnownCommandRequest {
  int32 id = 1;
  string command = 2;
}

message KnownCommandResponse {
  int32 id = 1;
  bool known = 2;
}

message ListCommandsRequest {
  int32 id = 1;
}

message ListCommandsResponse {
  int32 id = 1;
  repeated string commands = 2;
}

message QuitRequest {
  int32 id = 1;
}

message QuitResponse {
  int32 id = 1;
  Error error = 2;
}

message BoardSizeRequest {
  int32 id = 1;
  int32 size = 2;
}

message BoardSizeResponse {
  int32 id = 1;
  Error error = 2;
}

message ClearBoardRequest {
  int32 id = 1;
}

message ClearBoardResponse {
  int32 id = 1;
}

message KomiRequest {
  int32 id = 1;
  float komi = 2;
}

message KomiResponse {
  int32 id = 1;
  Error error = 2;
}

message PlayRequest {
  int32 id = 1;
  Move move = 2;
}

message PlayResponse {
  int32 id = 1;
  Error error = 2;
}

message GenMoveRequest {
  int32 id = 1;
  Color color = 2;
}

message GenMoveResponse {
  int32 id = 1;
  Move move = 2;
}

message UndoRequest {
  int32 id = 1;
}

message UndoResponse {
  int32 id = 1;
  Error error = 2;
}

service GTP {

  // Version of the GTP Protocol
  rpc ProtocolVersion (ProtocolVersionRequest) returns (ProtocolVersionResponse) {}

  // E.g. “GNU Go”, “GoLois”, “Many Faces of Go”. The name does not include any version information. Use `version`.
  rpc Name (NameRequest) returns (NameResponse) {}

  // E.g. “3.1.33”, “10.5”. Engines without a sense of version number return the empty string.
  rpc Version (VersionRequest) returns (VersionResponse) {}

  // Returns “true” if the command is known by the engine, “false” otherwise
  rpc KnownCommand (KnownCommandRequest) returns (KnownCommandResponse) {}

  // Lists all known commands, including required ones and private extensions.
  rpc ListCommands (ListCommandsRequest) returns (ListCommandsResponse) {}

  // The session is terminated and the connection is closed.
  rpc Quit (QuitRequest) returns (QuitResponse) {}

  // Changes the board size. If the engine cannot handle the new size, fails with the error message ”unacceptable size”.
  rpc BoardSize (BoardSizeRequest) returns (BoardSizeResponse) {}

  // Clears the board, captured stones are reset, and move history is reset
  rpc ClearBoard (ClearBoardRequest) returns (ClearBoardResponse) {}

  // Changes the Komi
  rpc Komi (KomiRequest) returns (KomiResponse) {}

  // Plays the given move
  rpc Play (PlayRequest) returns (PlayResponse) {}

  // Asks the engine to generate a move, it will play it, and will return what was played
  rpc GenMove (GenMoveRequest) returns (GenMoveResponse) {}

  // The board and captured stones are reset to the previous move
  rpc Undo(UndoRequest) returns(UndoResponse) {}

}

Dotlin - A Kotlin DSL for Graphviz Dot

How Kotlin DSLs Work

TODO

About

This library is a Kotlin DSL for the Graphviz Dot language. Its syntax is fairly similar to standard Dot, with some tweaks to make it work as a Kotlin DSL. The library is useful for generating graphs programmatically.

digraph {

    // Normal Kotlin statements
    val p = (0..9).partition { it % 2 == 0 }
    val evens = p.first
    val odds = p.second

    // Building a Dot graph
    for (i in evens) {
        +"$i" + { shape = DotNodeShape.SQUARE }
        odds.filter { abs(it - i) == 1 }.forEach { "$it" - "$i" }
    }
    for (i in odds) {
        +"$i" + { shape = DotNodeShape.CIRCLE }
        evens.filter { abs(it - i) == 1 }.forEach { "$it" - "$i" }
    }
}

Puzzle

graph {
    "a" - "b"
    "a" - "c"
    "a" - "d"
    "b" - "a"
    "b" - "d"
    "c" - "a"
    "c" - "d"
}

Example

Dot

digraph g {
	node [shape=plaintext];
	A1 -> B1;
	A2 -> B2;
	A3 -> B3;
	
	A1 -> A2 [label=f];
	A2 -> A3 [label=g];
	B2 -> B3 [label="g'"];
	B1 -> B3 [label="(g o f)'" tailport=s headport=s];

	{ rank=same; A1 A2 A3 }
	{ rank=same; B1 B2 B3 } 
}

Dotlin

digraph("g") {

    // Global node attributes
    node {
        shape = DotNodeShape.PLAINTEXT
    }
    
    // Edges
    "A1" - "B1"
    "A2" - "B2"
    "A3" - "B3"
    "A1" - "A2" + { label = "f" }
    "A2" - "A3" + { label = "g" }
    "B2" - "B3" + { label = "g'" }
    "B1" - "B3" + {
        label = "(g o f)'"
        tailport = DotPortPos.S
        headport = DotPortPos.S
    }

    +subgraph {
        rank = "same"
        +"A1"
        +"A2"
        +"A3"
    }

    +subgraph {
        rank = "same"
        +"B1"
        +"B2"
        +"B3"
    }
}

Usage

Basics

// This produces the Dreampuf default graph
// https://dreampuf.github.io/GraphvizOnline 

val g = digraph("G") {

    +subgraph("cluster_0") {
        node {
            style = "filled"
            color = "white"
            "a0" - "a1"
            "a1" - "a2"
            "a2" - "a3"
        }
        style = DotSubgraphStyle.FILLED
        color = "lightgrey"
        label = "process #1"
    }

    +subgraph("cluster_1") {
        node {
            style = "filled"
        }
        "b0" - "b1"
        "b1" - "b2"
        "b2" - "b3"
        color = "blue"
        label = "process #2"
    }

    "start" - "a0"
    "start" - "b0"
    "a1" - "b3"
    "b2" - "a3"
    "a3" - "a0"
    "a3" - "end"
    "b3" - "end"

    +"start" + {
        shape = DotNodeShape.DIAMOND
    }
    +"end" + {
        shape = DotNodeShape.MSQUARE
    }
}

println(g.dot())

Attributes

This library has all attributes listed in the Graphviz docs with corresponding types. Some attributes allow multiple types, such as Boolean and String, so you will find attributes with the suffixes "B" and "S" for the Boolean and String variants respectively.

See https://graphviz.org/doc/info/attrs.html

Graphs

graph { ... }

digraph { ... }

graph("name") { ... }

digraph("name") { ... }

Nodes

Add nodes in graphs using the unary `+` operator, like the Kotlin HTML DSL.

graph {
   // basic nodes
   +"a"
   +"b"
   
   // add node attributes using + binary infix op
   +"c" + {
     color = "blue"
   }
}

Edges

You can add edges between nodes using the binary infix `-` operator. You can also add edges between nodes and subgraphs. The graph type -- graph vs digraph -- determines the edge type: in Dotlin you always use `-` rather than Dot's `--` and `->`.

graph {
    // Nodes with attributes
    +"a" + { color = "blue" }
    +"b" + { shape = DotNodeShape.TRAPEZIUM }
    
    // Basic Edge
    "a" - "b"
}

digraph {
    // Edge with attributes
    "a" - "z" + {
        color = "green"
        style = DotEdgeStyle.DASHED
    }
}

Subgraphs

Add subgraphs with +subgraph { ... } and the code within the subgraph block is the same as for a graph/digraph block.

graph {

    // Add subgraph to graph
    +subgraph {
        ...
    }
    
    // Subgraph as variable
    val sg = subgraph { ... }

    // Add it to the graph
    +sg
    
    // Subgraph as edge target
    "A" - subgraph { ... }
    
    // Subgraph as edge source
    subgraph { ... } - "B"
}

Reference

The library is based on the BNF given in the official docs. See https://graphviz.org/doc/info/lang.html

graph       : [ strict ] (graph | digraph) [ ID ] '{' stmt_list '}'
stmt_list   : [ stmt [ ';' ] stmt_list ]
stmt        : node_stmt
            | edge_stmt
            | attr_stmt
            | ID '=' ID
            | subgraph
attr_stmt   : (graph | node | edge) attr_list
attr_list   : '[' [ a_list ] ']' [ attr_list ]
a_list      : ID '=' ID [ (';' | ',') ] [ a_list ]
edge_stmt   : (node_id | subgraph) edgeRHS [ attr_list ]
edgeRHS     : edgeop (node_id | subgraph) [ edgeRHS ]
node_stmt   : node_id [ attr_list ]
node_id     : ID [ port ]
port        : ':' ID [ ':' compass_pt ]
            | ':' compass_pt
subgraph    : [ subgraph [ ID ] ] '{' stmt_list '}'
compass_pt  : (n | ne | e | se | s | sw | w | nw | c | _)

Doomboard: Ben Moon's Worst Nightmare

Doomboard

Repo

Purpose

Helth

Overview

I screwed bits of wood to the bottom of my deck to create some form of climbing training. Next, I generated routes and collected them into a "guidebook". It's all choss.


Discovering Moves

I have twelve holds labeled A-L. I need to generate fewer than 1000 problems with each problem being reasonably possible.

The Move Matrix

A move, considering hands only, is determined by three holds: which holds are your hands currently on, and which hold are you going to? One of the three holds is a connection between the start position and the end position. If we treat the positions as vertices, then the moves are edges, e.g. AB -> BC. There are (12 choose 2) = 66 possible positions, but I did not want to include all of them. Some are unreasonable -- the holds are more than arm's length apart or at funny angles. I hear you; I'm soft. Of the 66, I kept 58.

What are the moves?

I need to determine which positions are connected by a hold. This simply means checking for a shared character in two positions.

for i := 0; i < len(positions); i++ {
  for j := 0; j < len(positions); j++ {
    a := positions[i]
    b := positions[j]
    if j != i && strings.ContainsAny(a, b) {
      fmt.Printf("%s -> %s\n", a, b)
    }
  }
}

I represented the moves as a weighted adjacency matrix where each edge weight is the move's difficulty. I generated the adjacency matrix with row and column labels, then tested all 515 moves in the upper-right triangle, rating each move's difficulty on a scale from 1 to 4. Each move is, ideally, reversible, which means the matrix is symmetric and I can complete it by reflecting the upper-right triangle.
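Reflecting the upper-right triangle across the diagonal can be sketched as below (an illustration of the step; the actual matrix lives in a text file).

```go
package main

// mirror completes a weighted adjacency matrix by copying the
// upper-right triangle across the diagonal, relying on the
// assumption that every move is reversible.
func mirror(m [][]int) {
	for i := range m {
		for j := i + 1; j < len(m); j++ {
			m[j][i] = m[i][j]
		}
	}
}
```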

The unweighted moves can be found in moves/moves_unweighted.txt. The weighted moves can be found in moves/moves_weighted.txt.

grep -o 1 ./moves_unweighted.txt | wc -l
1030
# -> 515 in upper triangle

NOTE: The 1-4 scale is shifted to 2-5. I used 1 as a placeholder. Also, nothing is "Loading". I'm a dunce and accidentally got that in the screenshot.

Generating Problems

I began by converting the adjacency matrix to an adjacency list. Then I generated all walks of length 4-7, i.e. problems with 4-7 moves. This produced far too many, so I added some limiting conditions.

  • No repeated holds (eliminates cycles)
  • Problems have exactly four moves
    • You can always suggest extensions/link-ups in the spray
  • After two positions, only walk two random subtrees
    • This is more easily explained with a diagram

The last rule helps avoid generating a bunch of routes with long shared prefixes like so.

A B C D W
A B C D X
A B C D Y
A B C D Z

Results: 650 problems

Naming Problems

I downloaded a list of popular books on Goodreads. Famous titles make it easy to remember problems.

# Filter the list to shorter titles
gawk -v FPAT='[^,]*|("[^"]*")+' '{
  if (length($1) <= 20)
    print $1","$2
}' books.csv > books-short.csv

Grading

The goal was to assign problems to categories. If the book titles had genres associated with them, I would have used those. Grades were generated by averaging the difficulty of each move, with random +1/-1 difficulty sprinkled in; without that bit of randomness, 480 of the 650 problems all had difficulty 2. Grade A is the rounded average, and grade B is the quarter subdivision within that average's rounding interval.

Grade B

I subdivided the rounding interval into four parts and assigned letters a-d. For example, the numbers which average to 2 lie in the rounding interval [1.5, 2.5).

.50-.74 -> a
.75-.99 -> b
--- avg. ---
.00-.24 -> c
.25-.49 -> d

Examples

Avg   Grade A   Grade B
1.5   2         a
2.8   3         b
3.0   3         c
1.4   1         d
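The grading rule above can be sketched as a small function (my reading of the rule, not the actual script): grade A rounds the average half-up, and grade B picks the quarter of the rounding interval the fractional part falls into.

```go
package main

import "math"

// grade maps an average move difficulty to its A and B grades.
// A: average rounded half-up. B: quarter of the rounding interval,
// .50-.74 -> a, .75-.99 -> b, .00-.24 -> c, .25-.49 -> d.
func grade(avg float64) (int, string) {
	a := int(math.Floor(avg + 0.5)) // round half up
	frac := avg - math.Floor(avg)
	var b string
	switch {
	case frac < 0.25:
		b = "c"
	case frac < 0.5:
		b = "d"
	case frac < 0.75:
		b = "a"
	default:
		b = "b"
	}
	return a, b
}
```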

Distribution

cat problems.json | jq -j '.[] | .grade_a," ", .grade_b, "\n"' | sort | uniq -c
  77 1 1
  92 1 2
  51 1 3
  50 1 4
  67 2 1
  53 2 2
  88 2 3
  94 2 4
  10 3 1
   2 3 2
  46 3 3
  20 3 4

# count,  grade_a, grade_b (1-4 -> a-d)

Takeaways

Don't use Go as a scripting language. That was dumb. Python would've been 10x easier.

DanDan Project

I've been playing around with making a mobile frontend for Go engines. The project is private for now, but here's a preview. Ideally the frontend will support win projections from the engines as well as spheres of influence / estimated territory. After all, the point of this project is to create a frontend for training Go against different engines, much like KaTrain.

Preview

Complex Numbers in Go -> Mandelbrot Set Gif

Background

I'm currently doing exercises from The Go Programming Language. The complex number section uses the Mandelbrot set as an example to illustrate how to use complex numbers. The exercises that follow ask you to add full color and then implement supersampling to decrease pixelation. I then generated a gif with various palettes. Bonus zoom video.

Before

Exercises

Extra

Source

package main

import (
	"image"
	"image/color"
	"image/color/palette"
	"image/gif"
	"math"
	"math/cmplx"
	"os"
)

const (
	frames = 60
	zoom   = 1
	side   = 512
	delay  = 20
)

func main() {
	anim := gif.GIF{LoopCount: frames}
	for i := 0; i < frames; i++ {
		anim.Delay = append(anim.Delay, delay)
		anim.Image = append(anim.Image, mandelbrotImage(i))
	}
	gif.EncodeAll(os.Stdout, &anim)
}

func mandelbrotImage(frame int) *image.Paletted {
	bound := 1.0
	shift := int(255 * math.Sin(math.Pi*float64(frame)/frames))
	img := image.NewPaletted(image.Rect(0, 0, side, side), palette.Plan9)
	for py := 0; py < side; py++ {
		y1 := (float64(py)+.25)/side*2*bound - bound
		y2 := (float64(py)-.25)/side*2*bound - bound
		for px := 0; px < side; px++ {
			x1 := (float64(px)+.25)/side*2*bound - 2*bound
			x2 := (float64(px)-.25)/side*2*bound - 2*bound
			z1 := mandelbrot(shift, complex(x1, y1))
			z2 := mandelbrot(shift, complex(x1, y2))
			z3 := mandelbrot(shift, complex(x2, y1))
			z4 := mandelbrot(shift, complex(x2, y2))
			r := (int(z1.R) + int(z2.R) + int(z3.R) + int(z4.R)) / 4
			g := (int(z1.G) + int(z2.G) + int(z3.G) + int(z4.G)) / 4
			b := (int(z1.B) + int(z2.B) + int(z3.B) + int(z4.B)) / 4
			c := color.RGBA{
				R: uint8(r),
				G: uint8(g),
				B: uint8(b),
				A: 255,
			}
			img.Set(px, py, c)
		}
	}
	return img
}

func mandelbrot(shift int, z complex128) color.RGBA {
	const iterations = 255
	var v complex128
	for n := 0; n < iterations; n++ {
		v = v*v + z
		if cmplx.Abs(v) > 2 {
			i := (shift + n) % 255
			return palette.Plan9[i].(color.RGBA)
		}
	}
	return color.RGBA{A: 255}
}

Go Flat Yourself

This is a library to flat map Go structs.

Library code

Background

I was writing an API client for a service which expected query strings and returned XML. I wanted to define each type of request's input and output as Go structs and have google/go-querystring and encoding/xml handle (un)marshaling.

The encoding/xml package worked for my purposes, but the google/go-querystring package did not format nested keys as the service expected. I could have sent a PR to google/go-querystring, but that was not feasible for reasons unrelated to the package -- nonetheless, it's a good package with a fair number of output-format options.

Flat mapping Go structs is a fairly straightforward exercise with Go reflection, and writing fewer than 100 lines of code was the path of least resistance. This library is slightly more general than google/go-querystring because it outputs a string map rather than query params and accepts primitives in addition to structs.

Example

type AStruct struct {
    A string
    B []string
    C CStruct
}

type CStruct struct {
    D int
    E bool
}

s := &AStruct{
    A: "hello",
    B: []string{"x", "y", "z"},
    C: CStruct{
        D: 7,
        E: true,
    },
}

o := flat.Flatten(s)

// value of `o`
//   map[string]string{
//     "A": "hello",
//     "B.0": "x",
//     "B.1": "y",
//     "B.2": "z",
//     "C.D": "7",
//     "C.E": "true",
//   }

Notes

  • It will print values of pointers
  • If given a primitive of type p with value v, Flatten returns {"p": "v"}
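The reflection approach can be sketched in a few dozen lines. This is an illustrative re-implementation of the idea, not the library's actual code: walk pointers, structs, and slices recursively, joining field names and indices with '.', and stringify the leaves.

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// Flatten walks a struct (or pointer to one) and returns a flat
// map of dotted paths to stringified leaf values.
func Flatten(v interface{}) map[string]string {
	out := map[string]string{}
	walk("", reflect.ValueOf(v), out)
	return out
}

func walk(prefix string, val reflect.Value, out map[string]string) {
	switch val.Kind() {
	case reflect.Ptr, reflect.Interface:
		walk(prefix, val.Elem(), out) // dereference and recurse
	case reflect.Struct:
		t := val.Type()
		for i := 0; i < val.NumField(); i++ {
			walk(join(prefix, t.Field(i).Name), val.Field(i), out)
		}
	case reflect.Slice, reflect.Array:
		for i := 0; i < val.Len(); i++ {
			walk(join(prefix, strconv.Itoa(i)), val.Index(i), out)
		}
	default:
		out[prefix] = fmt.Sprint(val.Interface())
	}
}

func join(prefix, name string) string {
	if prefix == "" {
		return name
	}
	return prefix + "." + name
}
```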

SAC 843rd

We're Both Coming Through Fine

Hello?... Uh... Hello D- uh hello Dmitri? Listen uh uh I can't hear too well. Do you suppose you could turn the music down just a little?... Oh-ho, that's much better... yeah... huh... yes... Fine, I can hear you now, Dmitri... Clear and plain and coming through fine... I'm coming through fine, too, eh?... Good, then... well, then, as you say, we're both coming through fine... Good... Well, it's good that you're fine and... and I'm fine... I agree with you, it's great to be fine... a-ha-ha-ha-ha... Now then, Dmitri, you know how we've always talked about the possibility of something going wrong with the Bomb... The Bomb, Dmitri... The hydrogen bomb!... Well now, what happened is... ahm... one of our base commanders, he had a sort of... well, he went a little funny in the head... you know... just a little... funny. And, ah... he went and did a silly thing... Well, I'll tell you what he did. He ordered his planes... to attack your country... Ah... Well, let me finish, Dmitri... Let me finish, Dmitri... Well listen, how do you think I feel about it?... Can you imagine how I feel about it, Dmitri?... Why do you think I'm calling you? Just to say hello?... Of course I like to speak to you!... Of course I like to say hello!... Not now, but anytime, Dmitri. I'm just calling up to tell you something terrible has happened... It's a friendly call. Of course it's a friendly call... Listen, if it wasn't friendly... you probably wouldn't have even got it... They will not reach their targets for at least another hour... I am... I am positive, Dmitri... Listen, I've been all over this with your ambassador. It is not a trick... Well, I'll tell you. We'd like to give your air staff a complete run-down on the targets, the flight plans, and the defensive systems of the planes... Yes! I mean i-i-i-if we're unable to recall the planes, then... I'd say that, ah... well, ah... we're just gonna have to help you destroy them, Dmitri... I know they're our boys... All right, well listen now. 
Who should we call?... Who should we call, Dmitri? The... wha-whe, the People... you, sorry, you faded away there... The People's Central Air Defense Headquarters... Where is that, Dmitri?... In Omsk... Right... Yes... Oh, you'll call them first, will you?... Uh-huh... Listen, do you happen to have the phone number on you, Dmitri?... Whe-ah, what? I see, just ask for Omsk information... Ah-ah-eh-uhm-hm... I'm sorry, too, Dmitri... I'm very sorry... All right, you're sorrier than I am, but I am as sorry as well... I am as sorry as you are, Dmitri! Don't say that you're more sorry than I am, because I'm capable of being just as sorry as you are... So we're both sorry, all right?... All right.

Redpoint - Red River Gorge Climbing Guidebook

Redpoint

I wanted to create an iOS & Android application to serve as an offline guidebook to the Red River Gorge. Mountain Project is an existing mobile app guidebook, but the application suffers from incomplete route information and poor user experience. Mountain Project is excellent for its breadth of information, but I wanted to build a higher-quality application which was only for the Red River Gorge.

Gathering the Data

I started by scraping both Mountain Project and [Red River Climbing](redriverclimbing.com). The code can be found here and here, respectively. The data from Red River Climbing is significantly more complete, accurate, and up-to-date, so I used that.

The data model is simple: a region has multiple walls (areas/crags), and a crag has multiple routes. This is easy to model, and it is what I use in the latest version of the application. With the Mountain Project data, the relationship graph is more complicated to represent and query efficiently with sqlite -- for this I built the transitive closure. I only mention it because it was fun and interesting. You can find the sqlite database here and the table TSVs here.

Developing the App

The app is written with Flutter, Google's cross-platform mobile-development framework. At the time, Flutter was rather new and the community had not formed experience-backed opinions on architecture; since then, the BLoC pattern has gained some traction. I used a Model-View-Presenter architecture, and I see no need to change it. The app is essentially a convenient UI over the sqlite database, with the addition of image loading and GPS. Bookmarks are stored with shared preferences, so the database is read-only.

Screenshots



Features

  • Offline route data
    • All data is stored in the tiny sqlite database within the app
  • Fast search
    • Fuzzy searching on both routes and crags executed per-key-down
  • Bookmark areas
    • You can place a bookmark in a book, why not an app!
  • Area directions
    • Each area includes directions
  • Sorting
    • Areas (Alpha, Size)
    • Routes (Alpha, L2R, R2L, Stars, Grade)

Notable Extras

  • Grade distribution histograms
    • Understanding the difficulty of a particular area determines where you go for the day! You don't want to accidentally go to a crag where it's too difficult. Having grade distribution histograms per crag gives you a quick peek into what that crag is like.
  • Sport/Trad route distinction
    • Sometimes you want to drink beer and rip gear. This feature distinguishes between trad and sport in the grade histograms.
  • Live GPS tracking for navigating to crags
    • Somewhat experimental and was a fun quick addition

How to Download?

Unfortunately, I do not own the data, so I cannot publish this on the App and Play Stores. I reached out to the guidebook author, Ray Ellington, and he said he was working on an app and did not want to collaborate -- this broke my heart. Last time I checked (Feb 2020), photos could still not be saved offline. This feature should be a requirement. Ray says it is "impossible to include all of the photos" -- but this is not an excuse to skip a workaround. Mountain Project already has a solution, and I've included it in Redpoint. You simply store links to the photos (~10KB for all of them) and cache each photo when a user explicitly opts to download an area's photos or when a photo is loaded for the first time. There are many other feature gaps as well.

Installing from Source

This is a Flutter app and the data has been neatly packaged into a sqlite database. The database can be found in RRC-Data/sqlite as well as Redpoint-App/assets.

  1. Follow the Flutter installation instructions
  2. In Redpoint-App do flutter run --release

Optionally, you can build the APK for Android and sideload it.

Fuzzy Prefix Searching with a Trie

Exercises in Text Prediction

The goal was to create a word suggestion tool (with some fuzzing) using a trie -- implemented in Go. The final code.

End Result

Success Criteria

  • Level 0: Prefix Searching
  • Level 1: Trie-ing Harder
  • Level 2: The Great Filter
  • Level 3: Fuzzy Finding for Fat-Fingered Friends
  • Level 4: Considering Edit Distances
  • Level 5: Ordered Set as a Finite State Automaton

Setup

First, I need some words. The system dictionary "/usr/share/dict/words" is pretty good, but it has 236k words whereas this English word frequency dataset from Kaggle has more words as well as their frequencies.

This dataset contains the counts of the 333,333 most commonly-used single words on the English language web, as derived from the Google Web Trillion Word Corpus.

The data is comma-separated ("word,freq"), but I'm going to convert it to json for quick loading (unmarshaling) into memory (a Go map). Also, I'm going to replace the frequency count with the word's commonality ranking because I will be using a min-heap, and rankings give more context -- knowing how many times a word occurred in the corpus doesn't explicitly tell you its commonality relative to the 333k other words.

# Get list of words for building trie
awk -F , '{print $1}' original-dataset.csv > words.txt

# The words are in order by frequency -> the line number is the word's rank
awk -F , '
  BEGIN {print "{"}
  {printf "  \"%s\": %d,\n", $1, NR}
  END {print "}"}' words.txt > ranks.json

# trim the last trailing comma -- I could've used jq

L0: Prefix Searching

How can we find all words that start with "a"? What about words that start with "pre"?

grep "^a.*" words.txt

grep "^pre.*" words.txt

L1: Trie-ing Harder

A trie is a prefix tree where each node stores a map of child nodes keyed by single-character strings. String values are not stored in the nodes, but nodes are marked when they are the end of a word. For level 1, I will construct a trie from the dictionary (words.txt) and use that trie as my search index.

example trie

type Node struct {
  Children map[string]*Node
  IsWord   bool
}

Trie Construction

This method is straightforward. I iterate over the words, then iterate over each word's characters. I walk the trie character-by-character and create nodes when necessary. The last node corresponds to the last character of the word, so this node is marked with IsWord = true.

for word in words.txt
  for character in word
    // traverse trie and set to current node
    if (node does not exist for this character)
      // create node and mark as current
  // mark current node as a word

Prefix Search

Search is similar to construction. Given a prefix, I walk the trie character-by-character. Once I have walked to the end of the prefix, I perform an unbounded search on the sub-trie and accumulate characters to form partial words. If I find a node marked as a word, I add the current value of the accumulator to the results list.

L2: The Great Filter

I can now return all of the words which share a common prefix, but I'd rather have fewer suggestions. I will filter for the most common words.

Word Frequency

Transforming the kaggle dataset into json makes this easy with json.Unmarshal.

Marshal

func GetWordRanks(ranksFilePath string) map[string]int {
	ranks := make(map[string]int)
	// Errors are ignored for brevity; an unreadable file yields an empty map.
	bytes, _ := ioutil.ReadFile(ranksFilePath)
	_ = json.Unmarshal(bytes, &ranks)
	return ranks
}

Min Heap

Heaps are great for quickly returning n values in order. Using a heap makes returning the most common words simple.

func CommonWords(ranks map[string]int, words []string, n int) []string {

	h := &wordHeap{}
	heap.Init(h)

	// Add all words to the heap -- each push is O(log n)
	for _, w := range words {
		heap.Push(h, &word{
			text: w,
			rank: ranks[w],
		})
	}

	if h.Len() < n {
		n = h.Len()
	}

	// Pop the top n words off the heap -- indexing the backing slice
	// directly would not return them in rank order
	results := make([]string, n)
	for i := 0; i < n; i++ {
		results[i] = heap.Pop(h).(*word).text
	}
	return results
}

L3: Fuzzy Finding For Fat-Fingered Friends

I will attempt to improve the search by fuzzing prefixes. I can do this by defining a set of boundary characters for each key on the keyboard and searching the trie with those boundary characters. For example, the boundary characters of "a" on a qwerty keyboard are {"q", "w", "s", "z"}. These are the characters that are likely to be accidentally typed instead of "a".

The fuzzy trie search is done, for now, in two steps.

Setup the DFS

I search the trie for each permutation of the given prefix. I permute each character with its boundary characters. Once I reach a permutation that exists in the trie, I add it to the stack for the depth first search.
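This permutation step can be sketched in Go. The boundary map here is deliberately partial, and `boundary` and `permutations` are illustrative names, not from the final code:

```go
package main

import "fmt"

// boundary maps a key to its neighbors on a qwerty keyboard -- the
// characters likely to be typed by accident instead of it. This
// partial map is for illustration; the real table covers every key.
var boundary = map[rune][]rune{
	'a': {'q', 'w', 's', 'z'},
	's': {'a', 'w', 'e', 'd', 'x', 'z'},
	'd': {'s', 'e', 'r', 'f', 'c', 'x'},
}

// permutations expands a prefix into every string formed by replacing
// each character with itself or one of its boundary characters.
func permutations(prefix string) []string {
	results := []string{""}
	for _, r := range prefix {
		candidates := append([]rune{r}, boundary[r]...)
		var next []string
		for _, partial := range results {
			for _, c := range candidates {
				next = append(next, partial+string(c))
			}
		}
		results = next
	}
	return results
}

func main() {
	// Each permutation that exists in the trie seeds the DFS stack.
	fmt.Println(permutations("a")) // [a q w s z]
}
```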

DFS

The DFS is the exact same as the standard prefix search from before. I search through the trie and add each word I encounter to the results list.

Why isn't this good?

Efficiency aside, this isn't good because the permutations of a prefix result in searches for prefixes that often have more common words than the words for the given prefix.

Example: cat
This should return categories, catalog, category, ... in that order. However, "date" is the top result because "dat" is a prefix permutation of "cat" and it is more common than all of the results with the prefix "cat".

This fuzzing method assumes that typing accuracy is evenly distributed amongst the intended character and its boundary characters. In practice, people's typing accuracy is much higher than ~20% (I sure hope it is). I need to account for a permutation's edit distance to the prefix in the results ranking.

L4: Considering Edit Distances

In computational linguistics and computer science, edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. source

In this context, I am only concerned about the number of different characters. More precisely, the Hamming distance.

Fuzzy Searching with Edit Distance and Commonality Ranking Filters

I've added the edit distance filter to the "Setup the DFS" step of the fuzzy trie search. Doing this increases performance because I will not search down trie paths for words I would later filter out.

The rule is simple, yet effective. I only search for words whose permuted prefix has an edit distance less than or equal to 20% of the input prefix's length (rounded up). For example, if your prefix is 6 characters long, the search will include results whose prefix differs by up to 2 characters. In code:

if editDistance(inputPrefix, permutedPrefix) <= ceil(0.2 * len(inputPrefix)) {
  // add to stack for the DFS
}

This assumes ~80% typing accuracy which seems low for physical keyboards but fair for touch keyboards. Also, keeping the assumed typing accuracy low increases the fuzziness of the search which also helps with spelling mistakes.
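A minimal Go sketch of this filter, assuming the budget is rounded up (which matches the 6-character example above); `hamming` and `withinBudget` are illustrative names:

```go
package main

import (
	"fmt"
	"math"
)

// hamming counts the positions at which two equal-length strings differ.
func hamming(a, b string) int {
	if len(a) != len(b) {
		return -1 // Hamming distance is only defined for equal lengths
	}
	d := 0
	for i := 0; i < len(a); i++ {
		if a[i] != b[i] {
			d++
		}
	}
	return d
}

// withinBudget reports whether a permuted prefix is close enough to
// the typed prefix to search: at most 20% of its length, rounded up.
func withinBudget(typed, permuted string) bool {
	budget := int(math.Ceil(0.2 * float64(len(typed))))
	return hamming(typed, permuted) <= budget
}

func main() {
	fmt.Println(hamming("cat", "dat"))            // 1
	fmt.Println(withinBudget("prefix", "orefux")) // true: 2 swaps within a budget of 2
}
```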

L5: Ordered Set as a Finite State Automaton

Read the Burntsushi FSA/FST post! It's magic.

Implementing a finite state automaton backed ordered set for searching prefixes is a bit too difficult for the scope of this exercise. However, they are absolutely worth discussing! I've been using a trie, which does not combine suffixes. An FSA combines suffixes, so the overall data structure would be much smaller. We would also be able to perform better fuzzy searches. Here is the post's excerpt about fuzzy searching an FSA using Levenshtein automata. Note that "Levenshtein distance" and "edit distance" are the same thing, but I only considered Hamming distance in my implementation.

Given a collection of strings, one really useful operation is fuzzy searching. There are many different types of fuzzy searching, but we will only cover one here: fuzzy search by Levenshtein or “edit” distance.

For our purposes, the question we’d like to answer is: does this key match any of the keys in the set up to an Levenshtein distance of n?
Certainly, we could implement an algorithm to compute Levenshtein distance between two strings, iterate over the keys in one of our FST based ordered sets, and run the algorithm for each key. If the distance between the query and the key is <= n, then we emit it as a match. Otherwise, we skip the key and move on to the next.

The problem with this approach is that it is incredibly slow. It requires running an effectively quadratic algorithm over every key in the set. Not good.

It turns out that for our specific use case, we can do a lot better than that. Namely, in our case, one of the strings in every distance computation for a single search is fixed; the query remains the same. Given these conditions, and a known distance threshold, we can build an automaton that recognizes all strings that match our query.

Why is that useful? Well, our FST based ordered set is an automaton! That means answering the question with our ordered set is no different than taking the intersection of two automatons. This is really fast.

Wonkavision - Screen-printing a Movie Scene

Wonkavision

wonka-bar

WONKA: Wonkavision - My very latest and greatest invention.

MIKE: It's television.

WONKA: Uh, it's Wonkavision.  Now I suppose you all know how ordinary television works. You photograph something and--

MIKE: Sure, I do. You photograph something and then the photograph is split up into millions of tiny pieces, and they go whizzing through the air down to your TV set where they're all put together again in the right order.

WONKA: You should open your mouth a little wider when you speak.  So I said to myself, "If they can do it with a photograph, why can't I do it with a bar of chocolate?"  I shall now send this chocolate bar from one end of the room to the other.  It has to be big because whenever you transmit something by television, it always ends up smaller on the other end.  Goggles on, please.  Lights, camera, action!

MRS. TEAVEE: (screams)

WONKA: You can remove your goggles.

CHARLIE: Where's the chocolate?

WONKA: It's flying over our heads in a million pieces. Now watch the screen.  Here it comes. There it is. Take it.

MIKE: How can you take it?  It's just a picture.

WONKA: All right, you take it.

CHARLIE: It's real.

WONKA: Taste it; it's delicious. It's just gotten smaller, that's all.

CHARLIE: It's perfect.

MRS. TEAVEE: It's unbelievable.

GRANDPA JOE: It's a miracle.

MIKE: It's a TV dinner.

WONKA: It's Wonkavision.

TL;DR

  • Movie Scene
  • Mike Teavee claims television works by splitting data into millions of tiny pieces, which whiz through the air down to your TV set, and are put together again in the right order.
  • The data has to be big because whenever you transmit something by television, it always ends up smaller on the other end.

In the words of Willy Wonka himself... if they can do it with a photograph, why can't I do it with the movie, screen printed onto a 4'x3' bar of chocolate?

Screen Printing

Screen printing is a method to print images by hand. A print is composed of multiple layers that you create by passing ink through a screen which has stencils burned into it.

Example

From Wiki

Screen printing is a printing technique where a mesh is used to transfer ink onto a substrate, except in areas made impermeable to the ink by a blocking stencil. A blade or squeegee is moved across the screen to fill the open mesh apertures with ink, and a reverse stroke then causes the screen to touch the substrate momentarily along a line of contact. This causes the ink to wet the substrate and be pulled out of the mesh apertures as the screen springs back after the blade has passed. One color is printed at a time, so several screens can be used to produce a multicolored image or design.

Project Goal

Encode the Wonkavision scene into a "chocolate bar" with the ability to decode the scene i.e. "put [it] together again in the right order."

Disclaimer: I wrote this about eight months after doing the project so some figures may be off -- but the figures do not need to be precise to communicate the project's challenges and intentions effectively.

Challenges

  • How can I encode as much information as possible onto a canvas?
  • What dpi/resolution can I screen print?
  • What is the best way to decode my data?
  • What if there are errors in my printing?
  • Images take up a lot of space
  • Screen printing on a large canvas is much harder than a small canvas

Encoding the Data

My canvas is 42" x 33" because I will save some room for stylistic padding. This is typical for screen printing. My screen has a mesh count (threads per inch) of 280, which translates to about 110 dots-per-inch (DPI). The amount of data I can store in 110 DPI is determined by how many colors I use. In the context of Information Theory, this is the radix.

Two Colors?

If I use two colors, white & black, then a single dot is analogous to a bit. This simple isomorphism results in 110^2 bits ~ 1.5 kB per square inch. There are 42x33=1386 in^2 of printing surface, so I can fit 2.1 MB on the canvas. This is bad news considering the compressed one-minute movie scene is 8.4 MB, which is already impressively small for a video.

256 Colors?

What if we made a dot on our canvas equivalent to a byte by using 256 different colors? The number of dots~bytes we can fit on a page is 110^2*42*33, which is 16.8 MB (8x the previous calculation). Great news! We can fit the scene. Unfortunately, screen printing 255 layers is incredibly impractical and would take hundreds of hours and cost nearly a thousand dollars in transparent stencils and ink. This is not an option.

16 Colors?

If we make the color of a dot correspond to a hexadecimal value 0-15, then we can encode a byte in every two dots. This would allow us to store half as much data as 256 colors, which is exactly 8.4 MB -- enough to store the scene! Crazy how that works out (;

Unfortunately, there is a problem. I could print 15 layers (paper-white is a color). It would be exhausting, but it is feasible. The real problem is that this encoding has no error correction and no convenient way to decode. To decode it, I would have to create a scanner that can recognize the 16 different colors and be resilient to variations in those colors. This is possible; tricky, but possible. The real concern is the lack of error correction, because the printing process is manual and it is significantly harder to screen print on a massive canvas such as this.
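The capacity arithmetic from the sections above can be double-checked with a few lines of Go (`capacityMB` is my name for this sketch):

```go
package main

import "fmt"

// capacityMB computes how many megabytes fit on the 42"x33" canvas at
// 110 DPI, given the bits encoded per dot (1 bit = 2 colors,
// 4 bits = 16 colors, 8 bits = 256 colors).
func capacityMB(bitsPerDot int) float64 {
	dots := 110 * 110 * 42 * 33 // dots on the full canvas
	bytes := dots * bitsPerDot / 8
	return float64(bytes) / 1e6
}

func main() {
	fmt.Printf("%.1f MB\n", capacityMB(1)) // 2.1 MB -- two colors
	fmt.Printf("%.1f MB\n", capacityMB(4)) // 8.4 MB -- sixteen colors
	fmt.Printf("%.1f MB\n", capacityMB(8)) // 16.8 MB -- 256 colors
}
```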

QR Codes

This project requires that I can quickly decode data in an encoding that has redundancy. How about a bar code? How about a 2D bar code.. how about .. a QR Code! There are many types of QR codes, but I want the one that can hold the most data. This is a version 40 QR code, which measures 177x177 modules. So I cannot make them 1"x1" since my DPI is only 110 .. more on this later. You can find QR code details here. In a version 40 QR code, I have four redundancy options, with the capacity depending on which type of data I encode.

Redundancy | Binary (bytes) | Alphanumeric (chars)
L          | 2,953          | 4,296
M          | 2,331          | 3,391
Q          | 1,663          | 2,420
H          | 1,273          | 1,852

I must compromise. I cannot make the QR codes 1 inch because the DPI of my screen is too low. Therefore, I must make the QR codes 2 inches, which decreases my total capacity by a factor of 4. Ugh. So, instead of storing the whole movie scene and having to connect the chunked file, I am going to store one frame per QR code. In practice, this is much nicer because I can decode a single QR code and see the result. In the other case, you would have to scan every QR code on the canvas before getting the result.

Which QR code size is best?

Screen printing allows you to print impressively high-resolution images, so printing and scanning codes should be no problem. Through trial and error, I tested how much data I could fit into as little space as possible. I created a grid of twelve QR codes based upon two features: size and error correction level. I then manually screen printed these and tested scanning them with my phone. The results are below. An "x" denotes that the code scanned successfully.

Err. Corr. Lvl. | 1 in. | 2 in. | 3 in.
Low             |       |       | x
Med             |       |       | x
Q               |       | x     | x
High            |       | x     | x

I was pleasantly surprised with the results; however, this wasn't a perfect test. I used a smaller screen with a higher resolution (threads per inch). These trials were intentionally imperfect and were done for ballparking and assessing feasibility. The smaller screen is much easier to print with, and I hadn't received the larger screens at the time of testing. The thread count of my smaller screen is marginally higher than the larger screen's thread count, so the trials were still very useful. I decided to move forward with the 2-inch codes with high error correction because they maximize storage space while being the most resilient to printing errors. I did not want to use the Q level error-correcting codes, because the medium level 2-inch codes do not scan when printed with the higher-resolution screen. Using codes with higher redundancy grants me a safety buffer.

Decoding the Data

I wanted to encode the bytes of a png into the QR and have a scanner that reads the bytes and renders the image. For this, I modified an example QR reader mobile app which you can find in the viewer/ directory. It is a Flutter application so you can run it on both iOS and Android. My change is simply:

# pseudocode
void onCodeRead(String data) {
 pngBytes = base64Decode(data)
 renderImage(pngBytes)
}

Why is the data in base64?

The QR reader library did not play well with raw bytes. Data was encoded and decoded through the following procedure:

  1. Convert png image to base64
  2. Encode in QR code
  3. Screen Print
  4. Scan as ascii
  5. Decode base64
  6. Render png

Constructing the Image

Now that I've compromised on the problem, I need to gather my images and prepare them. I need to compress an image so that it is only a single kilobyte and able to fit into a QR code. This is quite tough and involved massively downsizing the image and downsampling the colors. In the wonka/assets directory you can find the full steps for producing these images in the readme. The process outline is:

# Take 5 frames per second from the video
ffmpeg -i ./video.mp4 -vf fps=5 ./frames/%04d.jpg

# Cropping 1280x720 to 720x720
# - Offset x-coordinate by (1280-720)=560
# - Convert to png
ffmpeg -i image.jpg -vf "crop=720:720:560:0" cropped.png

# Scale cropped images
ffmpeg -i cropped.png -vf scale=75:75 "scaled.png"

# Downsample images to 4 colors
# pngquant without dithering gives best compression size
pngquant --nofs --speed=1 --output="downsampled.png" 4 "scaled.png"

# Convert to base64
base64 ./downsampled.png > image.txt

Here's a before and after. I quite like the pixel-art look of the resultant images.

Before

After

The Process

I wrote a simple python script to generate an appropriately sized grid of 2" QR codes at the HIGH redundancy level based upon the converted images from the previous step.

Base

Bar Base

Apply Style

Bar

Decoded View

Decoded

Screen Printing


Bonus Mini Print

I decided to also make a "to-scale" Wonkabar with a reference to the project.

Interesting Extras!

Three Color QR Codes

I am now able to quickly decode the information as well as encode with error correction to account for errors in the printing process. However, I have 1/4 the capacity because a 2" QR code takes up 4x as much space as a 1" QR code. What if I can encode two images into the space of a single QR code? I got this idea from anaglyph 3D (blue & red glasses) where you can put two different images (perspectives) together and filter one out for each eye. Let's try this with QR codes! The images below correspond to a blue lens filter, red lens filter, and no filter.

Filter

Combined

Try these out with your phone's camera, and if you have 3D glasses, you should be able to scan both codes on the combined code where the data in the code matches the color of the filter you use. What about for a 40x40?

40x40

For the brave few that scanned the 40x40, the data may look a bit odd; it's in base64.

Screen Printing 3D QR Codes

It is important to understand color models to make sense of the challenges of screen printing anaglyph images. The two color models I have been working with are RGB (displays) and CMYK (print).

RGB ~ Red, Green, Blue
CMYK ~ Cyan, Magenta, Yellow, blacK

The first is an additive model, and the second is a subtractive model. The type of model describes how colors combine. In the RGB model, mixing two colors adds their intensities together. On the other hand, colors mixed in the CMYK model subtract from one another's intensity. More detail on CMYK here. I will represent RGB colors by RGB(0-255,0-255,0-255) and CMYK colors by CMYK(0-255,0-255,0-255,0-255). Examples:

# red
RGB(255, 0, 0) ~ #FF0000 ~ CMYK(0, 255, 255, 0)

# cyan
CMYK(255, 0, 0, 0) ~ RGB(0, 255, 255)

# white! -- odd right? we'll see how this plays out
RGB(255, 255, 255) ~ CMYK(0, 0, 0, 0)

Notice how cyan in the CMYK model corresponds to full green and blue in the RGB model. Anaglyph 3D glasses are designed to filter light coming from displays (tv's, computers, etc.) in the RGB model. One eye gets all of the red, and the other eye gets the green and blue. Your brain combines these images into full-color. If I want to make my scanning work for both colors, I need to print perfect red and cyan.

There is a technique in screen printing generally referred to as CMYK printing, in which the four process colors are used to produce full-color images. This is the same technique used by inkjet printers (cartridges). One must print four layers -- a layer for each base color -- and then the layers are combined into a full-color result.
example

We use "process" colors to produce CMYK images. A process color is a pure base color meant to be combined in a multi-layer printing process. This means I can get perfect cyan by using the process cyan. Fortunately, our shop had a pure red, so I did not need to mix the process magenta and process yellow. At this point, things are lining up.

I printed some 3D QR codes, and the red scanned, but the cyan was not being entirely filtered out. This meant I could only scan one of the two codes. It then occurred to me that the process cyan is premixed with transparent base (a clear ink to make the color moderately transparent) which was allowing some of the white into the mix. This is a bit odd, but here's my analysis.

This issue is easier to discover when using the RGB model because it more closely shows what gets passed through the glasses/filters.

# Let L = canvas color
# Let R = layer color
# Let T = layer color alpha (transparency) [0.0-1.0]
# 
# Weighted Avg. Model:
# L + R = (1 - T) * L + T * R

Example

# Add blue at 50% transparency to a green canvas.
.5 * RGB(0, 255, 0) + .5 * RGB(0, 0, 255) = RGB(0, 128, 128)

Combining Colors

After looking more into the makeup of process cyan, I see why the filtering fails. Take a look here. Wiki says process cyan has the profile CMYK(255, 56, 0, 20). Note, the wiki page normalized the color on a scale of 0-100 and I scaled it to 0-255 for consistency. The first problem I see is that process cyan has a bit of magenta. The second problem is that process cyan is a bit transparent with an alpha of ~0.85 which corresponds to about 15% transparency. What happens when this is printed onto paper-white?

# Paper-white + process cyan
0.15 * CMYK(0, 0, 0, 0) + 0.85 * CMYK(255, 56, 0, 20) = CMYK(217, 47, 0, 17)

# Convert process cyan into rgb
CMYK(255, 56, 0, 20) ~ RGB(0, 183, 235)

# Same calculation in RGB space
0.15 * RGB(255, 255, 255) + 0.85 * RGB(0, 183, 235) = RGB(38, 194, 238) 

If you look at the RGB calculation, the problem is obvious. The resultant value for the red component (38) is non-zero! What happens when I apply my cyan filter when trying to scan a code?

# process cyan on paper-white with a cyan filter
RGB(38, 194, 238) - RGB(0, 255,  255) = RGB(38, 0, 0)
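The weighted-average layering and filter subtraction above can be sketched in Go. `over` and `through` are illustrative names, and the clamp-at-zero filter is my simplification of what the scanner sees:

```go
package main

import "fmt"

// RGB holds channel intensities on a 0-255 scale.
type RGB struct{ R, G, B float64 }

// over layers ink at opacity t onto a base color using the weighted
// average model from above: (1 - t)*base + t*ink.
func over(base, ink RGB, t float64) RGB {
	return RGB{
		(1-t)*base.R + t*ink.R,
		(1-t)*base.G + t*ink.G,
		(1-t)*base.B + t*ink.B,
	}
}

// through simulates viewing a color through a lens filter by
// subtracting the filter color and clamping each channel at zero.
func through(c, f RGB) RGB {
	clamp := func(x float64) float64 {
		if x < 0 {
			return 0
		}
		return x
	}
	return RGB{clamp(c.R - f.R), clamp(c.G - f.G), clamp(c.B - f.B)}
}

func main() {
	paper := RGB{255, 255, 255}
	processCyan := RGB{0, 183, 235}

	// Process cyan printed at ~85% opacity onto paper-white.
	printed := over(paper, processCyan, 0.85)
	fmt.Printf("printed: %.0f %.0f %.0f\n", printed.R, printed.G, printed.B) // 38 194 238

	// Residual red leaks through the cyan filter, so the cyan code
	// never fully disappears for the scanner.
	seen := through(printed, RGB{0, 255, 255})
	fmt.Printf("seen:    %.0f %.0f %.0f\n", seen.R, seen.G, seen.B) // 38 0 0
}
```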

I've recreated this digitally by adjusting the cyan in my combined QR codes to the color value of process cyan printed on paper-white. This is what the scanner sees with no filter, red filter, and a cyan filter.

The lingering little blue squares in the third panel have too much contrast against the background, so the scanner fails to read them.

I know that this method can be done, but with the inks and time I had, I could not pull it off. To make this work, I needed a non-transparent process cyan so that no white from the paper leaked through the filter. But this is a blessing in disguise! This method limited my use of color, so it would have been rather difficult to recreate a colorful Wonkabar with these red and cyan codes.

Compact Easel

The goal was to make a stowable easel with adjustable capacity. I wanted it to be foldable without having to disconnect parts. My roommate and I brainstormed some three-way hinge designs, and this Y-shaped design worked well and was easy to cut with the router using a cup as a guide for the curve. There is a hinge at the top to connect the top of the A-frame, and a hinge underneath the Y to connect the support leg. The A-frame is completed by a small shelf with two routed channels so that you can adjust the capacity. The easel can be folded with the shelf turned vertically and then tightened to secure the easel in its folded form.

If I had done the project a few weeks later, I would've learned something from Naoko's easel.

Ktor Protobuf Serialization and Deserialization

I enjoy using Protobufs to generate data objects in different languages with minimal effort, and I want to use Ktor as a RESTful service framework. Consider Swagger as an alternative. This is what is required for protobuf serde with Ktor. I wasn't able to find this in the Ktor docs.

Converter

class ProtobufConverter : ContentConverter {
    override suspend fun convertForReceive(context: PipelineContext<ApplicationReceiveRequest, ApplicationCall>): Any? {
        val request = context.subject
        val channel = request.value as? ByteReadChannel ?: return null
        return channel.readRemaining().readBytes()
            .let {
                request.type.javaObjectType.getMethod("parseFrom", ByteArray::class.java).invoke(null, it)
            }
    }

    override suspend fun convertForSend(
        context: PipelineContext<Any, ApplicationCall>,
        contentType: ContentType,
        value: Any
    ): Any? {
        if (value !is MessageLite) return null
        return ByteArrayContent(value.toByteArray(), contentType)
    }
}

Content Negotiation and Routing

fun Application.configureRouting(service: ServiceImplementation) {
    install(ContentNegotiation) {
        register(ContentType.Application.ProtoBuf, ProtobufConverter()) // application/protobuf
    }
    routing {
        post("/my-api") {
            val req = call.receive<MyReq>()    // MyReq protobuf message
            val res = service.myApi(req)       // MyRes protobuf message
            call.respond(res)
        }
    }
}
