Maurice Herlihy: The Art of Multiprocessor Programming



book cover I was often curious how parallel code works and how classic concurrent data structures are organized. So I decided to read The Art of Multiprocessor Programming by Maurice Herlihy, and it covered almost everything I wanted. The book thoroughly explains locks and other concurrency primitives, concurrent data structures and some best practices. Although it is a bit textbook-like, most parts aren’t boring to read. And as a nice touch, every chapter contains exercises.

As a drawback, the book covers mostly mutable data structures.

Little app for learning Czech word endings



czgramma screenshot

Last September I started learning the Czech language. It’s interesting and not that hard, but a few things are a real pain. One of them is word endings: they vary between declensions, genders and forms, and there are a lot of exceptions. So I decided to write a little app that would grab random texts from the internet, replace word endings with inputs, and check the correctness of the entered values. I also wanted to make the app as small as possible, ideally even server-less.

As a data source I chose Czech Wikipedia; its API has all I need.

We can easily try this API with curl:

# Get random article title:
➜ curl "https://cs.wikipedia.org/w/api.php?action=query&list=random&rnlimit=1&rnnamespace=0&format=json"
{"batchcomplete":"","continue":{"rncontinue":"0.126319054816|0.12632136624|1223411|0","continue":"-||"},"query":{"random":[{"id":1248403,"ns":0,"title":"Weinmannia"}]}}%

# Get article summary by title:
➜ curl "https://cs.wikipedia.org/w/api.php?action=query&prop=extracts&exintro=&explaintext=&titles=Weinmannia&format=json"
{"batchcomplete":"","query":{"pages":{"1248403":{"pageid":1248403,"ns":0,"title":"Weinmannia","extract":"Weinmannia je rod rostlin z \u010deledi kunoniovit\u00e9 (Cunoniaceae). Jsou to d\u0159eviny se zpe\u0159en\u00fdmi nebo jednoduch\u00fdmi vst\u0159\u00edcn\u00fdmi listy a drobn\u00fdmi kv\u011bty v klasovit\u00fdch nebo hroznovit\u00fdch kv\u011btenstv\u00edch. Rod zahrnuje asi 150 druh\u016f. Je roz\u0161\u00ed\u0159en zejm\u00e9na na ji\u017en\u00ed polokouli v m\u00edrn\u00e9m p\u00e1su a v tropick\u00fdch hor\u00e1ch. Vyskytuje se v Latinsk\u00e9 Americe, jihov\u00fdchodn\u00ed Asii, Tichomo\u0159\u00ed, Madagaskaru a Nov\u00e9m Z\u00e9landu.\nRostliny byly v minulosti t\u011b\u017eeny zejm\u00e9na jako zdroj t\u0159\u00edslovin a d\u0159eva k v\u00fdrob\u011b d\u0159ev\u011bn\u00e9ho uhl\u00ed. Maj\u00ed tak\u00e9 v\u00fdznam v domorod\u00e9 medic\u00edn\u011b."}}}}%
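
The two calls above chain naturally: fetch a random title, then fetch that article’s plain-text intro. Here is a small Python sketch of the same flow using only the standard library (the function names are mine, not part of the app):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = 'https://cs.wikipedia.org/w/api.php'


def api_url(**params):
    # Every call hits the same endpoint and always asks for JSON.
    return API + '?' + urlencode(dict(params, format='json'))


def random_extract():
    # Step 1: pick a random article title.
    with urlopen(api_url(action='query', list='random',
                         rnlimit=1, rnnamespace=0)) as resp:
        title = json.load(resp)['query']['random'][0]['title']
    # Step 2: fetch the plain-text intro of that article.
    with urlopen(api_url(action='query', prop='extracts',
                         exintro='', explaintext='', titles=title)) as resp:
        page = next(iter(json.load(resp)['query']['pages'].values()))
    return title, page.get('extract', '')
```

Note that random_extract performs real network requests, so it needs internet access.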

For the frontend I used React with Redux and Material-UI. I didn’t want to mess with webpack configs, so I just used Create React App, and it works pretty well.

Most of the implementation isn’t interesting, it’s just a standard React+Redux app. But there were a few not so obvious problems.

The first was finding word endings: there are no NLP tools for Czech in the JavaScript world. But the language has a limited set of endings, so hardcoding them and checking every long enough word works:

const ENDS = [
  'etem',
  'ého',
  ...
  'ě',
];

(word) => {
  if (word.length < 4) {
    return null;
  }

  for (const end of ENDS) {
    if (word.endsWith(end) && (word.length - end.length) > 3) {
      return end;
    }
  }

  return null;
}
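
The same check can be sketched in Python; the ending list here is just an illustrative subset (the real one is much longer), and the length thresholds mirror the JS code above:

```python
# Hypothetical subset of Czech endings; the real list is much longer.
ENDS = ['etem', 'ého', 'ě']


def find_ending(word, min_stem=3, min_len=4):
    # Skip short words entirely, and require at least `min_stem`
    # letters of stem to remain after stripping the ending.
    if len(word) < min_len:
        return None
    for end in ENDS:
        if word.endswith(end) and len(word) - len(end) > min_stem:
            return end
    return None


print(find_ending('krásného'))  # 'ého'
print(find_ending('pes'))       # None
```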

Another problem was extracting words from text: regular expressions like (\w+) don’t work with words like křižovatka. And I needed to extract not only the word itself, but also the symbols before and after it. So I used an ugly regexp:

(part) => {
  const matched = part.match(/([.,\/#!$%\^&\*;:{}=\-_`~()]*)([^.,\/#!$%\^&\*;:{}=\-_`~()]*)([.,\/#!$%\^&\*;:{}=\-_`~()]*)/);
  return matched.slice(1);
}
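
The same three-group split (leading punctuation, the word, trailing punctuation) works in Python’s re as well; a sketch:

```python
import re

# Same groups as the JS regexp above:
# [prefix punctuation][word][suffix punctuation].
PUNCT = r"[.,/#!$%^&*;:{}=\-_`~()]"
WORD = r"[^.,/#!$%^&*;:{}=\-_`~()]"
TOKEN_RE = re.compile("({0}*)({1}*)({0}*)".format(PUNCT, WORD))


def split_token(part):
    # All three groups are starred, so match() always succeeds.
    return TOKEN_RE.match(part).groups()


print(split_token('(křižovatka),'))  # ('(', 'křižovatka', '),')
```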

The last non-obvious part was making client-side routing work properly on Heroku (on page refresh, etc). It was solved by just adding static.json with content like:

{
  "root": "build/",
  "clean_urls": false,
  "routes": {
    "/**": "index.html"
  },
  "https_only": true
}

And that’s all. The resulting app is a bit hackish and a bit ugly, but it’s useful, at least for me.

App, sources on github.

Soundlights with Raspberry Pi and NeoPixel Strip



About a month ago I bought a NeoPixel clone and decided to create a garland/soundlights for Christmas.

TLDR:

The first problem was analysing the audio stream in real time. I found a nice console audio visualizer – cava – and changed it a bit: instead of drawing nice-looking bars through ncurses, it now just prints the heights of the bars. It was done with a little patch:

diff --git a/cava.c b/cava.c
index 48482d6..13e7ce1 100644
--- a/cava.c
+++ b/cava.c
@@ -792,13 +792,6 @@ as of 0.4.0 all options are specified in config file, see in '/home/username/.co
                        f[i] = 0;
                }
 
-               #ifdef NCURSES
-               //output: start ncurses mode
-               if (om == 1 || om ==  2) {
-                       init_terminal_ncurses(color, bcolor, col, bgcol);
-                       get_terminal_dim_ncurses(&w, &h);
-               }
-               #endif
 
                if (om == 3) get_terminal_dim_noncurses(&w, &h);
 
diff --git a/output/raw.c b/output/raw.c
index b3b9d1e..16196b9 100644
--- a/output/raw.c
+++ b/output/raw.c
@@ -5,6 +5,13 @@
 
 int print_raw_out(int bars, int fp, int is_bin, int bit_format, int ascii_range, char bar_delim, char frame_delim, int f[200]) {
        int i;
+        for (i = 0; i <  bars; i++) {
+               uint8_t f8 = ((float)f[i] / 10000) * 255;
+               printf("%d ", f8);
+        }
+        printf("\n");
+        return 0;
+
 
        if (is_bin == 1){//binary
                if (bit_format == 16 ){//16bit:

Then I created a special config where the number of bars equals the count of LEDs on the strip:

[general]
bars = 60

[output]
bit_format = 8bit
method = raw
style = mono

And it works:

➜  cava git:(master) ✗ ./cava -p soundlights/cava_config
15 22 34 22 15 11 16 11 8 7 11 8 12 8 8 5 4 6 4 5 4 5 4 4 6 10 8 12 8 12 15 14 9 11 9 14 21 14 20 30 20 16 19 17 13 14 10 9 8 9 8 5 5 7 7 11 10 11 8 7 
25 38 58 38 25 18 28 20 14 17 26 18 21 14 16 13 9 10 8 11 7 9 11 9 14 21 21 20 15 23 26 24 17 23 17 24 37 24 35 52 35 28 32 30 22 24 18 16 14 16 16 11 10 15 12 18 18 19 14 14 
33 50 75 50 34 24 36 27 19 24 36 24 27 20 27 19 13 13 12 15 15 13 15 12 19 28 31 26 20 31 33 31 23 32 22 31 47 31 45 67 45 36 42 38 28 31 23 21 19 21 21 16 14 20 16 24 23 25 19 18 
39 58 87 58 47 32 41 32 22 29 44 29 31 24 34 24 19 15 15 18 20 15 18 15 22 34 38 30 24 37 39 36 28 38 26 36 55 36 52 78 52 42 48 44 33 36 26 24 22 24 25 19 17 23 19 28 27 29 22 21 

The next problem was converting bar heights to colors; I made it as simple as possible. Max bar height is 255, so I generated a palette of 256 colors with code found on Quora:

def _get_spaced_colors(n):
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]

    return [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in colors]
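
To see what this produces, here is the same function with a quick check (run under Python 3). Note the slightly surprising detail that range() actually yields 257 entries for n = 256, which conveniently covers every bar height from 0 to 255:

```python
def _get_spaced_colors(n):
    # 16581375 == 255 ** 3: colors are spaced evenly over the RGB cube,
    # produced as 6-digit hex strings and split into (r, g, b) tuples.
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]

    return [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in colors]


palette = _get_spaced_colors(256)
print(len(palette))   # 257
print(palette[0])     # (0, 0, 0)
print(palette[-1])    # (253, 2, 0)
```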

After that I needed to change LED colors from the Raspberry Pi, so first of all I connected the strip to the board. The documentation says that the strip should be connected through a level converter and use an external power supply, but mine works just fine connected straight to the Raspberry Pi:

  • ground → ground PIN;
  • power → 3.3V PIN;
  • logic → GPIO PIN 18.

For the software part I used the Python library rpi_ws281x and created a little app that reads from stdin and changes LED colors:

import sys
from neopixel import Adafruit_NeoPixel, Color

# LED strip configuration:
LED_COUNT = 60  # Number of LED pixels.
LED_PIN = 18  # GPIO pin connected to the pixels (must support PWM!).
LED_FREQ_HZ = 800000  # LED signal frequency in hertz (usually 800khz)
LED_DMA = 5  # DMA channel to use for generating signal (try 5)
LED_BRIGHTNESS = 255  # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)

# Colors:
COLORS_COUNT = 256
COLORS_OFFSET = 50


def _get_strip():
    strip = Adafruit_NeoPixel(
        LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS)

    strip.begin()

    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(0, 0, 0))

    strip.setBrightness(100)

    return strip


def _get_spaced_colors(n):
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]

    return [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in colors]


def _handle_stdin(colors, strip):
    while True:
        try:
            nums = map(int, sys.stdin.readline()[:-1].split())
            for i, num in enumerate(nums):
                strip.setPixelColor(i, Color(*colors[num]))

            strip.show()
        except Exception as e:
            print(e)


if __name__ == '__main__':
    _handle_stdin(_get_spaced_colors(COLORS_COUNT),
                  _get_strip())

For sending data to the Raspberry Pi I just used ssh:

./cava -p soundlights/cava_config | ssh pi@retropie.local sudo python soundlights.py

There was a problem with buffering, but unbuffer from expect solved it:

unbuffer ./cava -p soundlights/cava_config | ssh pi@retropie.local sudo python soundlights.py

And that’s all.

Sources on github.

Dave Fancher: The Book of F#



book cover A few months ago I decided to try something different – a different platform and a different language. As I mostly work with Python/JS/JVM languages, I chose .NET and F#. So I read The Book of F# by Dave Fancher from the Joy of Coding bundle, and it was interesting. The book nicely explains aspects of F# with examples. It also covers non-straightforward parts of the language and explains why they were made that way.

Although the author assumed that readers would be familiar with the .NET platform and C#, it wasn’t a problem at all.

Site Reliability Engineering: How Google Runs Production Systems



book cover white Not so long ago I wanted to read something about DevOps and processes in real organisations, so I chose Site Reliability Engineering: How Google Runs Production Systems. It nicely explains deployment, failure recovery, support and other SRE aspects from both engineering and management points of view. It’s also interesting to read about the problems and solutions of huge systems.

However informative, the book is a bit boring. And almost all of the cases are Google-specific or only become a problem at a very large scale.

AST transformations with __future__-like module



In the previous article I wrote how to add partial application with ... and piping with @ using AST transformations. However, we needed to transform the AST manually. To automate it I planned to use macropy, but it doesn’t work with Python 3 and is a bit too complicated. So I ended up with the idea of a __transformers__ module that works similarly to Python’s __future__ module. The code will look like:

from __transformers__ import ellipsis_partial, matmul_pipe


range(10) @ map(lambda x: x ** 2, ...) @ list @ print

So first of all, to implement it we need to extract the enabled transformer names from the code. It’s easy with ast.NodeVisitor: we just process all ImportFrom nodes:

import ast


class NodeVisitor(ast.NodeVisitor):
    def __init__(self):
        self._found = []

    def visit_ImportFrom(self, node):
        if node.module == '__transformers__':
            self._found += [name.name for name in node.names]

    @classmethod
    def get_transformers(cls, tree):
        visitor = cls()
        visitor.visit(tree)
        return visitor._found

Let’s run it:

tree = ast.parse(code)

>>> print(NodeVisitor.get_transformers(tree))
['ellipsis_partial', 'matmul_pipe']

The next step is to define transformers. A transformer is just a Python module with a transformer variable, which is an instance of ast.NodeTransformer. For example, the transformer module for piping with the matrix multiplication operator looks like:

import ast


class MatMulPipeTransformer(ast.NodeTransformer):
    def _replace_with_call(self, node):
        """Call right part of operation with left part as an argument."""
        return ast.Call(func=node.right, args=[node.left], keywords=[])

    def visit_BinOp(self, node):
        if isinstance(node.op, ast.MatMult):
            node = self._replace_with_call(node)
            node = ast.fix_missing_locations(node)

        return self.generic_visit(node)


transformer = MatMulPipeTransformer()

Now we can write a function that extracts the used transformers, imports them, and applies them to the AST:

from importlib import import_module


def transform(tree):
    transformers = NodeVisitor.get_transformers(tree)

    for module_name in transformers:
        module = import_module('__transformers__.{}'.format(module_name))
        tree = module.transformer.visit(tree)

    return tree

And use it on our code:

from astunparse import unparse

>>> unparse(transform(tree))
from __transformers__ import ellipsis_partial, matmul_pipe
print(list((lambda __ellipsis_partial_arg_0: map((lambda x: (x ** 2)), __ellipsis_partial_arg_0))(range(10))))

The next part is automatically applying transformations on module import; for that we need to implement a custom Finder and Loader. The Finder is almost the same as PathFinder, we just need to replace the Loader with ours in the spec. And the Loader is almost SourceFileLoader, but we need to run our transformations in the source_to_code method:

from importlib.machinery import PathFinder, SourceFileLoader


class Finder(PathFinder):
    @classmethod
    def find_spec(cls, fullname, path=None, target=None):
        spec = super(Finder, cls).find_spec(fullname, path, target)
        if spec is None:
            return None

        spec.loader = Loader(spec.loader.name, spec.loader.path)
        return spec


class Loader(SourceFileLoader):
    def source_to_code(self, data, path, *, _optimize=-1):
        tree = ast.parse(data)
        tree = transform(tree)
        return compile(tree, path, 'exec',
                       dont_inherit=True, optimize=_optimize)

Then we need to put our finder in sys.meta_path:

import sys

def setup():
    sys.meta_path.insert(0, Finder)
    
setup()

And now we can just import modules that use transformers. But it requires some bootstrapping.

We can make it easier by creating a __main__ module that registers the module finder and runs a module or file:

from runpy import run_module
from pathlib import Path
import sys
from . import setup

setup()

del sys.argv[0]

if sys.argv[0] == '-m':
    del sys.argv[0]
    run_module(sys.argv[0])
else:
    # runpy.run_path ignores meta_path for first import
    path = Path(sys.argv[0]).parent.as_posix()
    module_name = Path(sys.argv[0]).name[:-3]
    sys.path.insert(0, path)
    run_module(module_name)

So now we can run our module easily:

➜ python -m __transformers__ -m test   
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

➜ python -m __transformers__ test.py                 
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

And that’s all. You can try transformers yourself with the transformers package:

pip install transformers

Source code on github, package, previous part.

Partial application and piping with AST transformation



In the previous article I wrote about how to implement partial application and piping using operator overloading and decorators. But we can use a slightly different approach – AST transformation.

For example we have code:

def add(x, y):
    return x + y
    
    
addFive = add(..., 5)

print(addFive(10))

We can look at the AST of this code using the ast module from the standard library and the dump function from a gist:

import ast

code = open('src.py').read()  # the previous code
tree = ast.parse(code)
print(dump(tree))

It would be like:

Module(body=[
    FunctionDef(name='add', args=arguments(args=[
        arg(arg='x', annotation=None),
        arg(arg='y', annotation=None),
      ], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[
        Return(value=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load()))),
      ], decorator_list=[], returns=None),
    Assign(targets=[
        Name(id='addFive', ctx=Store()),
      ], value=Call(func=Name(id='add', ctx=Load()), args=[
        Ellipsis(),
        Num(n=5),
      ], keywords=[])),
    Expr(value=Call(func=Name(id='print', ctx=Load()), args=[
        Call(func=Name(id='addFive', ctx=Load()), args=[
            Num(n=10),
          ], keywords=[]),
      ], keywords=[])),
  ])

And we can easily spot call with ellipsis argument:

Call(func=Name(id='add', ctx=Load()), args=[
    Ellipsis(),
    Num(n=5),
  ], keywords=[])

We need to wrap each call with an ellipsis in a lambda and replace ... with the lambda’s argument. We can do it with ast.NodeTransformer, which calls the visit_Call method for each Call node:

class EllipsisPartialTransform(ast.NodeTransformer):
    def __init__(self):
        self._counter = 0

    def _get_arg_name(self):
        """Return unique argument name for lambda."""
        try:
            return '__ellipsis_partial_arg_{}'.format(self._counter)
        finally:
            self._counter += 1

    def _is_ellipsis(self, arg):
        return isinstance(arg, ast.Ellipsis)

    def _replace_argument(self, node, arg_name):
        """Replace ellipsis with argument."""
        replacement = ast.Name(id=arg_name,
                               ctx=ast.Load())
        node.args = [replacement if self._is_ellipsis(arg) else arg
                     for arg in node.args]
        return node

    def _wrap_in_lambda(self, node):
        """Wrap call in lambda and replace ellipsis with argument."""
        arg_name = self._get_arg_name()
        node = self._replace_argument(node, arg_name)
        return ast.Lambda(
            args=ast.arguments(args=[ast.arg(arg=arg_name, annotation=None)],
                               vararg=None, kwonlyargs=[], kw_defaults=[],
                               kwarg=None, defaults=[]),
            body=node)

    def visit_Call(self, node):
        if any(self._is_ellipsis(arg) for arg in node.args):
            node = self._wrap_in_lambda(node)
            node = ast.fix_missing_locations(node)

        return self.generic_visit(node)

So now we can transform the AST with the visit method and dump the result:

tree = EllipsisPartialTransform().visit(tree)
print(dump(tree))

And you can see changes:

Module(body=[
    FunctionDef(name='add', args=arguments(args=[
        arg(arg='x', annotation=None),
        arg(arg='y', annotation=None),
      ], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[
        Return(value=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load()))),
      ], decorator_list=[], returns=None),
    Assign(targets=[
        Name(id='addFive', ctx=Store()),
      ], value=Lambda(args=arguments(args=[
        arg(arg='__ellipsis_partial_arg_0', annotation=None),
      ], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=Call(func=Name(id='add', ctx=Load()), args=[
        Name(id='__ellipsis_partial_arg_0', ctx=Load()),
        Num(n=5),
      ], keywords=[]))),
    Expr(value=Call(func=Name(id='print', ctx=Load()), args=[
        Call(func=Name(id='addFive', ctx=Load()), args=[
            Num(n=10),
          ], keywords=[]),
      ], keywords=[])),
  ])

The AST is not easy to read, so we can use astunparse to transform it back into source code:

from astunparse import unparse

print(unparse(tree))

The result is a bit ugly, but more readable than the AST:

def add(x, y):
    return (x + y)
addFive = (lambda __ellipsis_partial_arg_0: add(__ellipsis_partial_arg_0, 5))
print(addFive(10))

To test the result we can compile the AST and run it:

exec(compile(tree, '<string>', 'exec'))
#  15

And it’s working! Back to piping; for example, we have this code:

"hello world" @ str.upper @ print

Its AST would be:

Module(body=[
    Expr(value=BinOp(left=BinOp(left=Str(s='hello world'),
                     op=MatMult(),
                     right=Attribute(value=Name(id='str', ctx=Load()), attr='upper', ctx=Load())), 
                                     op=MatMult(),
                                     right=Name(id='print', ctx=Load()))),
  ])

BinOp with op=MatMult() is where the matrix multiplication operator is used. We need to transform it into a call of the right part with the left part as an argument:

class MatMulPipeTransformation(ast.NodeTransformer):
    def _replace_with_call(self, node):
        """Call right part of operation with left part as an argument."""
        return ast.Call(func=node.right, args=[node.left], keywords=[])

    def visit_BinOp(self, node):
        if isinstance(node.op, ast.MatMult):
            node = self._replace_with_call(node)
            node = ast.fix_missing_locations(node)

        return self.generic_visit(node)

Transformed AST would be:

Module(body=[
    Expr(value=Call(func=Name(id='print', ctx=Load()), args=[
        Call(func=Attribute(value=Name(id='str', ctx=Load()), attr='upper', ctx=Load()), args=[
            Str(s='hello world'),
          ], keywords=[]),
      ], keywords=[])),
  ])

And the resulting code is just nested calls:

print(str.upper('hello world'))
#  HELLO WORLD

So now it’s time to combine both transformers. For example, we have this code:

from functools import reduce
import operator

range(100) @ filter(lambda x: x % 2 == 0, ...) \
           @ map(lambda x: x ** 2, ...) \
           @ zip(..., range(200, 250)) \
           @ map(sum, ...) \
           @ reduce(operator.add, ...) \
           @ str.format('result: {}', ...) \
           @ str.upper \
           @ print

We can transform and run it with:

code = open('src.py').read()  # the previous code
tree = ast.parse(code)

tree = MatMulPipeTransformation().visit(
    EllipsisPartialTransform().visit(tree))
    
exec(compile(tree, '<string>', 'exec'))

It works; the output, as expected, is:

RESULT: 172925

However, the resulting code is a bit messy:

from functools import reduce
import operator
print(str.upper((lambda __ellipsis_partial_arg_5: str.format('result: {}', __ellipsis_partial_arg_5))(
    (lambda __ellipsis_partial_arg_4: reduce(operator.add, __ellipsis_partial_arg_4))(
        (lambda __ellipsis_partial_arg_3: map(sum, __ellipsis_partial_arg_3))(
            (lambda __ellipsis_partial_arg_2: zip(__ellipsis_partial_arg_2, range(200, 250)))(
                (lambda __ellipsis_partial_arg_1: map((lambda x: (x ** 2)), __ellipsis_partial_arg_1))(
                    (lambda __ellipsis_partial_arg_0: filter((lambda x: ((x % 2) == 0)), __ellipsis_partial_arg_0))(
                        range(100)))))))))

This approach is better than the previous one: we don’t need to manually wrap all functions with ellipsis_partial or use the _ helper, and we don’t use a custom Partial. But with this approach we still need to transform the AST manually, so in the next part I’ll show how we can do it automatically with a module finder/loader.

Gist with sources, previous part, next part.

Partial application and piping with ... and @



Newer versions of Python have two rarely used features. The first is Ellipsis:

>>> print(...)
Ellipsis

And the second is the matrix multiplication operator:

class Dummy(str):
    def __matmul__(self, other):
        print('{}@{}'.format(self, other))


>>> Dummy('ok') @ 'there'
ok@there

So let’s start with .... In Scala we can partially apply (or curry) a function with _:

def add(x: Int, y: Int) = x + y
val addOne = add(1, _: Int)
addOne(5)  // 6: Int

Wouldn’t it be nice to have a similar option in Python, like:

def add(x, y):
    return x + y
    
addFive = add(..., 5)

And we can easily implement it with a decorator:

from functools import wraps

class Partial:
    def __init__(self, fn, args, kwargs):
        self._fn = fn
        self._args = args
        self._kwargs = kwargs

    def __call__(self, replacement):
        args = [replacement if arg is ... else arg
                for arg in self._args]
        kwargs = {key: replacement if val is ... else val
                  for key, val in self._kwargs.items()}
        return self._fn(*args, **kwargs)

    def __repr__(self):
        return '<Partial: {}(*{}, **{})>'.format(
            self._fn.__name__, repr(self._args), repr(self._kwargs))


def ellipsis_partial(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        ellipsises = (list(args) + list(kwargs.values())).count(...)
        if ellipsises > 1:
            raise TypeError('Only one ... allowed as an argument.')
        elif ellipsises:
            return Partial(fn, args, kwargs)
        else:
            return fn(*args, **kwargs)

    return wrapper

So here, if we find ... in the arguments, we return a Partial object. And when the object is called, we replace ... with the passed value. In action:

@ellipsis_partial
def add(x, y):
    return x + y


addFive = add(5, ...)
>>> addFive(10)
15

And it works! So, back to the matrix multiplication operator. F# has nice piping operators:

> [1..10] |> List.filter (fun x -> x % 3 = 0)
val it : int list = [3; 6; 9]

So with our operator in Python it should look like:

range(1, 10) @ filter(lambda x: x % 3 == 0, ...)

And we can easily implement it just by adding __rmatmul__ to Partial:

class Partial:
    def __init__(self, fn, args, kwargs):
        self._fn = fn
        self._args = args
        self._kwargs = kwargs

    def __call__(self, replacement):
        args = [replacement if arg is ... else arg
                for arg in self._args]
        kwargs = {key: replacement if val is ... else val
                  for key, val in self._kwargs.items()}
        return self._fn(*args, **kwargs)

    def __rmatmul__(self, replacement):
        return self(replacement)

    def __repr__(self):
        return '<Partial: {}(*{}, **{})>'.format(
            self._fn.__name__, repr(self._args), repr(self._kwargs))

And in action:

filter = ellipsis_partial(filter)
to_list = ellipsis_partial(list)
>>> range(1, 10) @ filter(lambda x: x % 3 == 0, ...) @ to_list(...)
[3, 6, 9]

And we can use it in even more complex cases:

map = ellipsis_partial(map)
join = ellipsis_partial(str.join)
>>> range(1, 10) @ map(lambda x: x + 4, ...) \
                 @ filter(lambda x: x % 3 == 0, ...) \
                 @ map(str, ...) \
                 @ join(', ', ...)
6, 9, 12

But it’s a bit ugly to wrap all callables in ellipsis_partial. We could use some hacks with inspect or module loaders to do it automatically, but that’s too magic for me. Instead we can add a little function that wraps and calls:

def _(fn, *args, **kwargs):
    return ellipsis_partial(fn)(*args, **kwargs)

Usage:

from functools import reduce
>>> range(1, 10) @ map(lambda x: x + 4, ...) \
                 @ filter(lambda x: x % 3 == 0, ...) \
                 @ _(reduce, lambda x, y: x * y, ...) \
                 @ _('value: {}'.format, ...)
value: 648

It may look strange and unpythonic, but I guess it would be nice to see something like this in future Python releases.

Gist with sources, next part.

Abusing annotations with dependency injection



Python 3 has a nice feature – type annotations:

def add(x: int, y: int) -> int:
    return x + y

They can be used by IDEs and tools like mypy for type checking. And we can easily access them:

>>> add.__annotations__
{'return': int, 'x': int, 'y': int}

And use it for things like dependency injection. For example we have a web app:

def get_db_connection() -> abc.DBConnection:
    ...


def get_routes() -> abc.Router:
    ...


def get_cache(db: abc.DBConnection) -> abc.CacheManager:
    ...


def init_app():
    db = get_db_connection()
    routes = get_routes()
    cache = get_cache(db)
    app = Application(routes=routes,
                      db=db,
                      cache=cache)
    app.run()


if __name__ == '__main__':
    init_app()

It looks a bit Java-like with the interfaces (abstract classes, abc), but that’s useful in huge apps. However, the components are tightly coupled, and we need to use monkey patching to test them.

Let’s examine annotations:

>>> get_cache.__annotations__
{'db': abc.DBConnection, 'return': abc.CacheManager}

We can see that the function requires abc.DBConnection and provides abc.CacheManager. We need to track all functions like this; it’s easy with a decorator:

from weakref import WeakValueDictionary

_provides = WeakValueDictionary()


def provides(fn):
    """Register function that provides something."""
    try:
        _provides[fn.__annotations__['return']] = fn
    except KeyError:
        raise ValueError('Function not annotated.')

    return fn

We use a WeakValueDictionary in case a function is somehow deleted.

Let’s apply this decorator:

@provides
def get_db_connection() -> abc.DBConnection:
    ...


@provides
def get_routes() -> abc.Router:
    ...


@provides
def get_cache(*, db: abc.DBConnection) -> abc.CacheManager:
    ...

And move the dependencies of the main function to its arguments:

def init_app(*, routes: abc.Router,
             db: abc.DBConnection,
             cache: abc.CacheManager):
    app = Application(routes=routes,
                      db=db,
                      cache=cache)
    app.run()

So we can think of our functions as a graph:

graph TB
  A[init_app] --- B[get_routes]
  A --- C[get_db_connection]
  A --- D[get_cache]
  D --- C

And we can easily write an injector that resolves and injects dependencies:

class Injector:
    """Resolve and inject dependencies."""

    def __init__(self):
        self._resolved = {}

    def _get_value(self, name):
        """Get dependency by name (type)."""
        if name not in _provides:
            raise ValueError("Dependency {} not registered.".format(
                name))

        if name not in self._resolved:
            fn = _provides[name]
            kwargs = self._get_dependencies(fn)
            # Cache the instance, so dependencies like the db are shared.
            self._resolved[name] = fn(**kwargs)
        return self._resolved[name]

    def _get_dependencies(self, fn):
        """Get dependencies for function."""
        return {key: self._get_value(value)
                for key, value in fn.__annotations__.items()
                if key != 'return'}

    def run(self, fn):
        """Resolve dependencies and run function."""
        kwargs = self._get_dependencies(fn)
        return fn(**kwargs)

So we can make our app work by adding:

if __name__ == '__main__':
    Injector().run(init_app)

Although this approach is simple and straightforward, it’s overkill for most apps.

Package on github.

Rerenderer rendering performance



Rerenderer is a React-like library for cross-platform drawing on canvas, and we are experimenting a lot with ways to improve rendering performance. Not so long ago I wrote about the intermediate language and its interpreter. That approach was interesting, but not very efficient: performing optimizations on that language wasn’t fast enough, and the interpreter on Android was based on reflection, which made it a bit slow.

So now we have decided to sacrifice flexibility and move the component implementation to the host platforms (we use Kotlin for Android and ClojureScript for browsers), and use a simpler approach with a tree of canvases (one canvas per component), where we rerender only the changed canvases and their ancestors. For example, we have a simple app:

(ns rerenderer.example.core
  (:require [rerenderer.core :refer [init!]]
            [rerenderer.primitives :refer [rectangle text]]
            [rerenderer.debug :refer [swap-state!]]))

(defn labeled
  [{:keys [label width] :as options} & children]
  (rectangle options
    children
    (text {:x (- width 50) :y 0
           :width 40 :height 40
           :font-size 36
           :color "#3E454C"
           :value label})))

(defn root
  [{:keys [background-color first-color second-color third-color]}]
  (labeled {:x 0 :y 0
            :width 800 :height 400
            :color background-color
            :label "A"}
    (labeled {:x 30 :y 30
              :width 200 :height 200
              :color first-color
              :label "B"})
    (labeled {:x 100 :y 100
              :width 500 :height 250
              :color second-color
              :label "C"}
      (labeled {:x 400 :y 150
                :width 100 :height 100
                :color third-color
                :label "D"}))))

(def initial-state
  {:background-color "#FFF6E5"
   :first-color "#7ECEFD"
   :second-color "#FF7F66"
   :third-color "#2185C5"})

(defonce app (init! :root-view #'root
                    :state initial-state
                    :canvas (.getElementById js/document "canvas")
                    :width 800
                    :height 600))

The whole scene can be represented as a tree:

graph TB
  A[A] --> B[B]
  A --> C[C]
  C --> D[D]

So when we change D:

(swap-state! app assoc :third-color :red)

We rerender D, C and A, but don’t touch B:

graph TB
  A[A] --> B[B]
  A --> C[C]
  C --> D[D]
  style D fill:red;
  style C stroke:red;
  style A stroke:red;

But when we change C:

(swap-state! app assoc :second-color :white)

We rerender only C and A, and don’t touch B and D, because the changes don’t affect their canvases:

graph TB
  A[A] --> B[B]
  A --> C[C]
  C --> D[D]
  style C fill:red;
  style A stroke:red;
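
The repaint rule can be sketched outside of Clojure too. Here is a toy Python version of the dirty-flag propagation (the names are mine, not rerenderer’s API): a change marks the node and all of its ancestors, while siblings stay untouched:

```python
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.dirty = name, parent, False


def invalidate(node):
    # Mark the changed node and every ancestor as needing a repaint.
    while node is not None:
        node.dirty = True
        node = node.parent


# The same tree as in the diagrams: A is the root, C contains D.
a = Node('A')
b = Node('B', a)
c = Node('C', a)
d = Node('D', c)

invalidate(d)
print([n.name for n in (a, b, c, d) if n.dirty])  # ['A', 'C', 'D']
```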

You can easily notice the performance boost in the browser on a more complex example – Conway’s Game of Life:

And on Android:

It’s faster than before, but you can notice that it’s not smooth; now we have a little problem with GC.