Randall Hyde: Write Great Code, Volume 1



Recently, for my studies, I sometimes had to use low-level stuff like C, so I decided to read something about low-level programming and chose Write Great Code, Volume 1 by Randall Hyde. The book is a bit too low-level: it even explains how processors of different architectures work inside, down to the transistor level. It also contains interesting information about how memory works on different levels and how the processor uses different types of memory. The chapters about primitive data structures aren’t as interesting, but they give basic knowledge of how things work inside.

Although the book can look a bit outdated, especially in the chapter about I/O, it’s interesting to read and even somewhat entertaining.

Simplify complex React components with generators



Sometimes writing complex React components, like huge dynamic forms, isn’t easy. By default we use the ternary operator for control flow in JSX, so, for example, a complex form with some logic will look like:

class Form extends Component {
  props: {
    importedAccount: boolean,
    corporateClient: boolean,
  };

  render() {
    return (
      <div>
        <form>
          {!this.props.importedAccount ? (
              <label>
                First Name
                <input type="text"/>
              </label>
            ) : null}
          {!this.props.importedAccount ? (
              <label>
                Last Name
                <input type="text"/>
              </label>
            ) : null}
          {this.props.corporateClient ? (
              <label>
                Organization
                <input type="text"/>
              </label>
            ) : null}
          {this.props.corporateClient ? (
              <label>
                Role
                <input type="text"/>
              </label>
            ) : null}
          {!this.props.corporateClient ? (
              <label>
                Billing address
                <input type="text"/>
              </label>
            ) : null}
        </form>
      </div>
    );
  }
}

But that many ternary operators look ugly, and explicitly writing ... ? ... : null isn’t fun at all. So we can try a slightly different approach and extract all these inputs into separate methods, like:

class Form extends Component {
  props: {
    importedAccount: boolean,
    corporateClient: boolean,
  };

  renderFirstNameInput() {
    if (!this.props.importedAccount) {
      return (
        <label>
          First Name
          <input type="text"/>
        </label>
      );
    }
  }

  renderLastNameInput() {
    if (!this.props.importedAccount) {
      return (
        <label>
          Last Name
          <input type="text"/>
        </label>
      );
    }
  }

  renderOrganizationInput() {
    if (this.props.corporateClient) {
      return (
        <label>
          Organization
          <input type="text"/>
        </label>
      );
    }
  }

  renderRoleInput() {
    if (this.props.corporateClient) {
      return (
        <label>
          Role
          <input type="text"/>
        </label>
      );
    }
  }

  renderBillingAddressInput() {
    if (!this.props.corporateClient) {
      return (
        <label>
          Billing address
          <input type="text"/>
        </label>
      );
    }
  }

  render() {
    return (
      <div>
        <form>
          {this.renderFirstNameInput()}
          {this.renderLastNameInput()}
          {this.renderOrganizationInput()}
          {this.renderRoleInput()}
          {this.renderBillingAddressInput()}
        </form>
      </div>
    );
  }
}

This looks much nicer, but it’s a bit bloated – there’s too much code. Another approach is to create a generator method and convert its output to an array in the render method, like:

class Form extends Component {
  props: {
    importedAccount: boolean,
    corporateClient: boolean,
  };

  *renderForm() {
    if (!this.props.importedAccount) {
      yield (
        <label key="first-name">
          First Name
          <input type="text"/>
        </label>
      );
      yield (
        <label key="last-name">
          Last Name
          <input type="text"/>
        </label>
      );
    }

    if (this.props.corporateClient) {
      yield (
        <label key="organization">
          Organization
          <input type="text"/>
        </label>
      );
      yield (
        <label key="role">
          Role
          <input type="text"/>
        </label>
      );
    } else {
      yield (
        <label key="billing-address">
          Billing address
          <input type="text"/>
        </label>
      );
    }
  }

  render() {
    return (
      <div>
        <form>
          {[...this.renderForm()]}
        </form>
      </div>
    );
  }
}

This looks much nicer: there’s far less code than in the methods approach, and it’s less hackish than the ternary approach. And we finally have easily readable control flow.

Testing React Native components without enzyme



In the React world enzyme is very popular, but it works poorly with react-native and requires some ugly mocks.

So I thought it would be easier to test components without it. First of all, React offers us react-test-renderer, which can render components to JSON:

import { View, Text, Button } from 'react-native';
import ReactTestRenderer from 'react-test-renderer';

const rendered = ReactTestRenderer.create(
  <View>
    <Text>Hello</Text>
    <Button title="OK" onPress={() => console.log('OK')}/>
  </View>
).toJSON();

console.log(rendered);

As a result, we get a fairly big object:

{
  "type": "View",
  "props": {},
  "children": [
    {
      "type": "Text",
      "props": {
        "accessible": true,
        "allowFontScaling": true,
        "ellipsizeMode": "tail"
      },
      "children": [
        "Hello"
      ]
    },
    {
      "type": "View",
      "props": {
        "accessible": true,
        "accessibilityComponentType": "button",
        "accessibilityTraits": [
          "button"
        ],
        "style": {
          "opacity": 1
        },
        "isTVSelectable": true
      },
      "children": [
        {
          "type": "View",
          "props": {
            "style": [
              {}
            ]
          },
          "children": [
            {
              "type": "Text",
              "props": {
                "style": [
                  {
                    "color": "#0C42FD",
                    "textAlign": "center",
                    "padding": 8,
                    "fontSize": 18
                  }
                ],
                "accessible": true,
                "allowFontScaling": true,
                "ellipsizeMode": "tail"
              },
              "children": [
                "OK"
              ]
            }
          ]
        }
      ]
    }
  ]
}

It’s already not too hard to find the child components you want to test, but I prefer using a little utility function:

import { flatMap } from "lodash";

const query = (node, match) => {
  let result = [];
  let notProcessed = [node];

  while (notProcessed.length) {
    result = [...result, ...notProcessed.filter(match)];
    notProcessed = flatMap(notProcessed, ({children}) => children || []);
  }

  return result;
};

With it, it’s easy to find, for example, all Text nodes:

query(rendered, ({type}) => type === 'Text');

[
  {
    type: 'Text',
    props: {
      accessible: true,
      allowFontScaling: true,
      ellipsizeMode: 'tail'
    },
    children: ['Hello']
  },
  {
    type: 'Text',
    props: {
      style: [Object],
      accessible: true,
      allowFontScaling: true,
      ellipsizeMode: 'tail'
    },
    children: ['OK']
  }
]

You can notice that there’s a Text node for our Button; that’s because of the underlying implementation of the Button component. If we don’t want to see the insides of some component, we can easily mock it:

jest.mock('Button', () => 'Button');
query(rendered, ({type}) => type === 'Button');

[{
  type: 'Button',
  props: {title: 'OK', onPress: [Function: onPress]},
  children: null
}];

Enzyme has a useful simulate method; instead of it, we can just call callback props, like onPress on our button:

query(rendered, ({type}) => type === 'Button')[0].props.onPress();

// OK

It’s harder when you need to pass an event object to a callback, but it can always be mocked.
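
For example, for a hypothetical TextInput somewhere in a rendered tree (not the tree above), a hand-made event object with just the field we care about is enough:

const [input] = query(rendered, ({type}) => type === 'TextInput');

// a minimal fake of a React Native change event, only the field we need
input.props.onChange({nativeEvent: {text: 'hello'}});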

Finding leaking tests with pytest



On one project we had a problem with leaking tests, and the problem was so big that some tests were leaking a few GB of RAM each. We tried pytest-leaks, but it was a bit of an overkill and didn’t work with our Python version. So we wrote a little leak detector ourselves.

First of all, we get the consumed RAM with psutil:

import os
from psutil import Process

_proc = Process(os.getpid())


def get_consumed_ram():
    return _proc.memory_info().rss

Then we created a log of RAM usage, where nodeid is pytest’s representation of a test, like tests/test_service.py::TestRemoteService::test_connection:

from collections import namedtuple

START = 'START'
END = 'END'
ConsumedRamLogEntry = namedtuple('ConsumedRamLogEntry', ('nodeid', 'on', 'consumed_ram'))
consumed_ram_log = []

Then we logged RAM usage from pytest hooks, which we just put in conftest.py:

def pytest_runtest_setup(item):
    log_entry = ConsumedRamLogEntry(item.nodeid, START, get_consumed_ram())
    consumed_ram_log.append(log_entry)


def pytest_runtest_teardown(item):
    log_entry = ConsumedRamLogEntry(item.nodeid, END, get_consumed_ram())
    consumed_ram_log.append(log_entry)

Pytest calls pytest_runtest_setup before each test, and pytest_runtest_teardown after.

And after all the tests we print information about tests that leaked more than allowed (10 MB in our case) from the pytest_terminal_summary hook:

from itertools import groupby

LEAK_LIMIT = 10 * 1024 * 1024


def pytest_terminal_summary(terminalreporter):
    grouped = groupby(consumed_ram_log, lambda entry: entry.nodeid)
    for nodeid, (start_entry, end_entry) in grouped:
        leaked = end_entry.consumed_ram - start_entry.consumed_ram
        if leaked > LEAK_LIMIT:
            terminalreporter.write('LEAKED {}MB in {}\n'.format(
                leaked / 1024 / 1024, nodeid))

So after running the tests we get our leaking tests, like:

LEAKED 712MB in tests/test_service.py::TestRemoteService::test_connection

App for using your phone as an Apple-like Touch Bar




The idea of the MacBook Touch Bar looks interesting: custom controls for open apps, like a button for running tests when you use an IDE, or player controls when you’re watching a movie. But it’s only available on the new MacBook Pro. So I thought it would be nice to have something similar, but with a phone instead of the Touch Bar. I also thought it would be nice to make it easily extensible.

TLDR: PhoneTouch. It’s very experimental, and at the moment only Linux is supported (but you can easily add support for other OSes). You can install and run it with:

npm install -g phone-touch
phone-touch

You also need to install the mobile app on Android from the apk, or build it manually if you have an iOS device.

So how does it work? Very simplified, the app just watches for events on the desktop (like switching windows, etc.) and remote-renders JSX components on the phone.

[Diagram: the desktop app and the mobile app are connected over WebSocket]

Desktop app

Let’s start with the desktop app, because it’s more interesting.

[Diagram: the data source emits new data to the rules; the rules pass controls to the WS server and callbacks from controls to the callbacks registry; the WS server dispatches events from clients to the callbacks registry]

Data sources

The first interesting thing in the desktop app is the data sources.

[Diagram: xdotool (current window), pulseaudio (volume) and playerctl (current song) feed the aggregated data source]

Data sources are special functions (interval, callback) -> void, which call callback every interval ms with the current value. So, for example, the data source that gets the current window title, pid and executable path looks like:

import { exec } from 'child_process';

const getExecutable = (pid, callback) =>
  exec(`readlink -f /proc/${pid}/exe`, (error, stdout, stderr) =>
    callback(stdout.split('\n')[0])
  );

const getWindow = (callback) =>
  exec('xdotool getwindowfocus getwindowname getwindowpid',
    (error, stdout, stderr) => {
      const [title, pid, ..._] = stdout.split('\n');
      getExecutable(pid, (executable) => callback({title, pid, executable}));
    });

export default (interval, callback) => exec('xdotool -h', (error) => {
  if (error) {
    return;
  }

  setInterval(
    () => getWindow((window) => callback({window})),
    interval);
});

Every interval ms it gets the current window by calling xdotool, then gets the executable and calls callback({window: {title, pid, executable}}). It’s not very efficient, but it works.
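
The aggregated data source from the diagram can then just run every source and merge the latest values into one object for the rules; a rough sketch (module paths and the exact merging here are assumptions, not the actual PhoneTouch code):

import window from './window';          // the xdotool source above
import pulseaudio from './pulseaudio';  // hypothetical volume source

const sources = [window, pulseaudio];

export default (interval, callback) => {
  let current = {};

  sources.forEach((source) =>
    source(interval, (data) => {
      // merge the latest value from this source and notify the rules
      current = {...current, ...data};
      callback(current);
    }));
};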

The results of the data sources are used in another significant part – the rules.

Rules

[Diagram: chrome, idea (IntelliJ IDEA), netflix, player, pulseaudio and VLC rules feed the aggregated controls]

Rules are functions ({data-sources}) -> control?. Let’s look at the rule that shows controls for VLC:

import controls, { View, TouchableHighlight, Icon, Text } from '../controls';
import { sendKey } from '../utils';

const styles = {
  title: {color: '#ffffff', fontSize: 10},
  controlsHolder: {flexDirection: 'row'},
  control: {color: '#ffffff', fontSize: 60}
};

export default ({window}) => {
  if (window.title.search('VLC media player') === -1) {
    return;
  }

  return (
    <View key="vlc-group">
      <Text style={styles.title}
            key="vlc-title">VLC</Text>
      <View style={styles.controlsHolder}
            key="vlc-icons">
        <TouchableHighlight onPress={() => sendKey('ctrl+Left')}
                            key="vlc-rewind">
          <Icon style={styles.control} name="rotate-left"/>
        </TouchableHighlight>
        <TouchableHighlight onPress={() => sendKey('space')}
                            key="vlc-play">
          <Icon style={styles.control} name="play-arrow"/>
        </TouchableHighlight>
        <TouchableHighlight onPress={() => sendKey('ctrl+Right')}
                            key="vlc-fast-forward">
          <Icon style={styles.control} name="rotate-right"/>
        </TouchableHighlight>
        <TouchableHighlight onPress={() => sendKey('f')}
                            key="vlc-fullscreen">
          <Icon style={styles.control} name="fullscreen"/>
        </TouchableHighlight>
        <TouchableHighlight onPress={() => sendKey('m')}
                            key="vlc-mute">
          <Icon style={styles.control} name="volume-mute"/>
        </TouchableHighlight>
      </View>
    </View>
  );
};

You can notice that we use JSX with React Native components (and Icon from Vector Icons) here, even with callbacks. But how does it work? First of all, we use "pragma": "controls" for the transform-react-jsx plugin in .babelrc:

{
  "presets": ["es2015"],
  "plugins": [
    ["transform-react-jsx", {
      "pragma": "controls"
    }],
    "transform-object-rest-spread",
    "syntax-flow",
    "transform-flow-strip-types"
  ]
}

So code like:

<TouchableHighlight onPress={() => sendKey('m')}
                    key="vlc-mute">
  <Icon style={styles.control} name="volume-mute"/>
</TouchableHighlight>

Will be translated to:

controls(
  TouchableHighlight,
  {onPress: () => sendKey('m'), key: 'vlc-mute'},
  controls(Icon, {style: styles.control, name: 'volume-mute'})); 

Here TouchableHighlight is just a string: we defined all supported components as strings in the desktop app, like:

export const View = 'View';
export const Image = 'Image';
export const Icon = 'Icon';
export const TouchableHighlight = 'TouchableHighlight';
export const Slider = 'Slider';
export const Text = 'Text';
export const Button = 'Button';

So the controls function can serialize our control to:

{
  tag: 'TouchableHighlight',
  props: {
    onPress: {callbackId: '8c9084e97cb9412eaa0ea68cd658609b'},
    key: 'vlc-mute'
  },
  children: [{
    tag: 'Icon',
    props: {
      style: {color: '#ffffff', fontSize: 60},
      name: 'volume-mute'
    }
  }]
}

You can notice that we replaced the function with {callbackId}, so we can serialize controls to JSON and send them to the mobile app. All callbacks are stored in a special registry, which we clear before every controls update.
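
A simplified sketch of what such a controls pragma and registry could look like (the names and details here are illustrative, the real implementation may differ):

export const callbacksRegistry = {};

const generateId = () => Math.random().toString(16).slice(2);

const serializeProps = (props) => {
  const result = {};

  for (const key of Object.keys(props || {})) {
    if (typeof props[key] === 'function') {
      // replace the function with an id and remember it in the registry
      const callbackId = generateId();
      callbacksRegistry[callbackId] = props[key];
      result[key] = {callbackId};
    } else {
      result[key] = props[key];
    }
  }

  return result;
};

export default (tag, props, ...children) => ({
  tag,
  props: serializeProps(props),
  children: children.length ? children : undefined,
});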

WebSocket server

The last part of the desktop app is a WebSocket server.

[Diagram: the WS client sends callback calls to the WS server; the WS server sends controls back to the client]

It sends new controls to clients when a data source emits new data, and listens for events from clients, calling the appropriate callbacks from the registry.
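
A minimal sketch of such a server with the ws package (the port and the callbacksRegistry import are assumptions carried over from the sketch above):

import { Server } from 'ws';
import { callbacksRegistry } from './controls';

const server = new Server({port: 8777});

// broadcast freshly serialized controls to every connected phone
export const sendControls = (controls) =>
  server.clients.forEach(
    (client) => client.send(JSON.stringify(controls)));

// call the registered callback when a client reports an event
server.on('connection', (client) =>
  client.on('message', (message) => {
    const {callbackId, args} = JSON.parse(message);
    const callback = callbacksRegistry[callbackId];

    if (callback) {
      callback(...(args || []));
    }
  }));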

Mobile app

The mobile app is just a plain and simple React Native + Redux + Redux Thunk app.

[Diagram: controls flow from the WS client through an action and the reducer to the component; callback calls flow from the component through an action back to the WS client]

So the only interesting thing here is the rendering of controls. First of all, we created an object with all supported components:

import * as reactNativeComponents from 'react-native';
import Icon from 'react-native-vector-icons/MaterialIcons';

this._components = {Icon, ...reactNativeComponents};

And then we can recursively create components from controls received from the desktop app with React.createElement:

_prepareChildren(children) {
  if (!children)
    return null;

  children = children.map(this._renderControl);

  if (children.length === 1) {
    return children[0];
  } else {
    return children;
  }
}

_renderControl(control) {
  if (isString(control))
    return control;

  const component = this._components[control.tag];
  if (!component) {
    console.warn('Unexpected component type:', control);
    return;
  }

  const props = this._prepareProps(control.props);
  const children = this._prepareChildren(control.children);

  return React.createElement(component, props, children);
}

Since we propagate callbacks to the desktop app, we need to process all props and wrap callbacks in special functions, which emit an action for sending the callback call to the desktop app:

_prepareArg(arg) {
  try {
    JSON.stringify(arg);
    return arg;
  } catch (e) {
    return '';
  }
}

_callback(callback) {
  return (...args) => this.props.callbackCalled({
    args: args.map(this._prepareArg),
    ...callback
  })
}

_prepareProps(props) {
  if (!props)
    return {};

  for (const key in props) {
    if (isObject(props[key]) && props[key].callbackId) {
      props[key] = this._callback(props[key]);
    }
  }

  return props;
}
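
The callbackCalled action itself can then just forward the call over the WebSocket; a sketch assuming redux-thunk with the socket passed as an extra argument (the real wiring may differ):

// send {callbackId, args} to the desktop app, which looks it up in its registry
export const callbackCalled = ({callbackId, args}) =>
  (dispatch, getState, {socket}) =>
    socket.send(JSON.stringify({callbackId, args}));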

And that’s all. Summing up, we got an app that acts in a way similar to the Touch Bar, but a lot simplified.

Sources on github.

Maurice Herlihy: The Art of Multiprocessor Programming



Many times I was interested in how parallel code works and how classic concurrent data structures are organized. So I decided to read The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit, and it was almost exactly what I wanted to read. The book extensively explains locks and other concurrency primitives, parallel data structures and some best practices. Although it is a bit textbook-like, most parts aren’t boring to read. And, as a nice touch, all chapters contain exercises.

As a drawback, the book mostly covers only mutable data structures.

Little app for learning Czech word endings



[czgramma screenshot]

Last September I started learning the Czech language. It’s interesting and not that hard, but a few things are a real pain in the ass. One of them is word endings: they vary between declensions, genders and forms, and there are a lot of exceptions. So I decided to write a little app that would grab random texts from the internet, replace word endings with inputs and check the correctness of the values in those inputs. I also wanted to make the app as small as possible, ideally even server-less.

As a data source I chose Czech Wikipedia: its API has everything I need.

We can easily try this API with curl:

# Get random article title:
➜ curl "https://cs.wikipedia.org/w/api.php?action=query&list=random&rnlimit=1&rnnamespace=0&format=json"
{"batchcomplete":"","continue":{"rncontinue":"0.126319054816|0.12632136624|1223411|0","continue":"-||"},"query":{"random":[{"id":1248403,"ns":0,"title":"Weinmannia"}]}}%

# Get article summary by title:
➜ curl "https://cs.wikipedia.org/w/api.php?action=query&prop=extracts&exintro=&explaintext=&titles=Weinmannia&format=json"
{"batchcomplete":"","query":{"pages":{"1248403":{"pageid":1248403,"ns":0,"title":"Weinmannia","extract":"Weinmannia je rod rostlin z \u010deledi kunoniovit\u00e9 (Cunoniaceae). Jsou to d\u0159eviny se zpe\u0159en\u00fdmi nebo jednoduch\u00fdmi vst\u0159\u00edcn\u00fdmi listy a drobn\u00fdmi kv\u011bty v klasovit\u00fdch nebo hroznovit\u00fdch kv\u011btenstv\u00edch. Rod zahrnuje asi 150 druh\u016f. Je roz\u0161\u00ed\u0159en zejm\u00e9na na ji\u017en\u00ed polokouli v m\u00edrn\u00e9m p\u00e1su a v tropick\u00fdch hor\u00e1ch. Vyskytuje se v Latinsk\u00e9 Americe, jihov\u00fdchodn\u00ed Asii, Tichomo\u0159\u00ed, Madagaskaru a Nov\u00e9m Z\u00e9landu.\nRostliny byly v minulosti t\u011b\u017eeny zejm\u00e9na jako zdroj t\u0159\u00edslovin a d\u0159eva k v\u00fdrob\u011b d\u0159ev\u011bn\u00e9ho uhl\u00ed. Maj\u00ed tak\u00e9 v\u00fdznam v domorod\u00e9 medic\u00edn\u011b."}}}}%

For the frontend I used React with Redux and Material-UI. I didn’t want to mess with webpack and other configs, so I just used Create React App. And it works pretty well.

The implementation is mostly not that interesting – it’s just a standard react+redux app. But there were a few not so obvious problems.

The first was finding word endings: there are no NLP tools for Czech in the JavaScript world. But the language has a limited set of endings, so hardcoding them and checking every long enough word works:

const ENDS = [
  'etem',
  'ého',
  ...
  'ě',
];

(word) => {
  if (word.length < 4) {
    return null;
  }

  for (const end of ENDS) {
    if (word.endsWith(end) && (word.length - end.length) > 3) {
      return end;
    }
  }

  return null;
}

Another problem was extracting words from the text: regular expressions like (\w+) don’t work with words like křižovatka. I also needed to extract not only the word, but the symbols before and after it. So I used an ugly regexp:

(part) => {
  const matched = part.match(/([.,\/#!$%\^&\*;:{}=\-_`~()]*)([^.,\/#!$%\^&\*;:{}=\-_`~()]*)([.,\/#!$%\^&\*;:{}=\-_`~()]*)/);
  return matched.slice(1);
}
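
Naming that function splitPart just for illustration, it splits a token into leading punctuation, the word itself and trailing punctuation:

const splitPart = (part) => {
  const matched = part.match(/([.,\/#!$%\^&\*;:{}=\-_`~()]*)([^.,\/#!$%\^&\*;:{}=\-_`~()]*)([.,\/#!$%\^&\*;:{}=\-_`~()]*)/);
  return matched.slice(1);
};

splitPart('(křižovatka),');  // => ['(', 'křižovatka', '),']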

The last non-obvious part was making client-side routing work properly on Heroku (on page refresh, etc). It was solved by just adding static.json with content like:

{
  "root": "build/",
  "clean_urls": false,
  "routes": {
    "/**": "index.html"
  },
  "https_only": true
}

And that’s all. The resulting app is a bit hackish and a bit ugly, but it’s useful, at least for me.

App, sources on github.

Soundlights with Raspberry Pi and NeoPixel Strip



About a month ago I bought some NeoPixel clone and decided to create a garland/soundlights for Christmas.

TLDR:

The first problem was analysing the audio stream in real time, so I found a nice console audio visualizer – cava – and changed it a bit: now, instead of showing nice looking bars through ncurses, it just prints the heights of the bars. It was done with a little patch:

diff --git a/cava.c b/cava.c
index 48482d6..13e7ce1 100644
--- a/cava.c
+++ b/cava.c
@@ -792,13 +792,6 @@ as of 0.4.0 all options are specified in config file, see in '/home/username/.co
                        f[i] = 0;
                }
 
-               #ifdef NCURSES
-               //output: start ncurses mode
-               if (om == 1 || om ==  2) {
-                       init_terminal_ncurses(color, bcolor, col, bgcol);
-                       get_terminal_dim_ncurses(&w, &h);
-               }
-               #endif
 
                if (om == 3) get_terminal_dim_noncurses(&w, &h);
 
diff --git a/output/raw.c b/output/raw.c
index b3b9d1e..16196b9 100644
--- a/output/raw.c
+++ b/output/raw.c
@@ -5,6 +5,13 @@
 
 int print_raw_out(int bars, int fp, int is_bin, int bit_format, int ascii_range, char bar_delim, char frame_delim, int f[200]) {
        int i;
+        for (i = 0; i <  bars; i++) {
+               uint8_t f8 = ((float)f[i] / 10000) * 255;
+               printf("%d ", f8);
+        }
+        printf("\n");
+        return 0;
+
 
        if (is_bin == 1){//binary
                if (bit_format == 16 ){//16bit:

Then I created a special config where bars equals the number of LEDs on the strip:

[general]
bars = 60

[output]
bit_format = 8bit
method = raw
style = mono

And it works:

➜  cava git:(master) ✗ ./cava -p soundlights/cava_config
15 22 34 22 15 11 16 11 8 7 11 8 12 8 8 5 4 6 4 5 4 5 4 4 6 10 8 12 8 12 15 14 9 11 9 14 21 14 20 30 20 16 19 17 13 14 10 9 8 9 8 5 5 7 7 11 10 11 8 7 
25 38 58 38 25 18 28 20 14 17 26 18 21 14 16 13 9 10 8 11 7 9 11 9 14 21 21 20 15 23 26 24 17 23 17 24 37 24 35 52 35 28 32 30 22 24 18 16 14 16 16 11 10 15 12 18 18 19 14 14 
33 50 75 50 34 24 36 27 19 24 36 24 27 20 27 19 13 13 12 15 15 13 15 12 19 28 31 26 20 31 33 31 23 32 22 31 47 31 45 67 45 36 42 38 28 31 23 21 19 21 21 16 14 20 16 24 23 25 19 18 
39 58 87 58 47 32 41 32 22 29 44 29 31 24 34 24 19 15 15 18 20 15 18 15 22 34 38 30 24 37 39 36 28 38 26 36 55 36 52 78 52 42 48 44 33 36 26 24 22 24 25 19 17 23 19 28 27 29 22 21 

The next problem was converting bar heights to colors; I made it as simple as possible. The bar heights fit into a byte, so I generated 256 colors with code found on Quora:

def _get_spaced_colors(n):
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]

    return [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in colors]

After that I needed to change the LED colors from the Raspberry Pi, so first of all I connected the strip to the board. The documentation says that the strip should be connected through a level converter and should use external power, but mine works just fine connected straight to the Raspberry Pi:

  • ground → ground PIN;
  • power → 3.3V PIN;
  • logic → GPIO PIN 18.

For the software part I used the Python library rpi_ws281x and created a little app that reads from stdin and changes the LED colors:

import sys
from neopixel import Adafruit_NeoPixel, Color

# LED strip configuration:
LED_COUNT = 60  # Number of LED pixels.
LED_PIN = 18  # GPIO pin connected to the pixels (must support PWM!).
LED_FREQ_HZ = 800000  # LED signal frequency in hertz (usually 800khz)
LED_DMA = 5  # DMA channel to use for generating signal (try 5)
LED_BRIGHTNESS = 255  # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)

# Colors:
COLORS_COUNT = 256
COLORS_OFFSET = 50


def _get_strip():
    strip = Adafruit_NeoPixel(
        LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS)

    strip.begin()

    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(0, 0, 0))

    strip.setBrightness(100)

    return strip


def _get_spaced_colors(n):
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]

    return [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in colors]


def _handle_stdin(colors, strip):
    while True:
        try:
            nums = map(int, sys.stdin.readline()[:-1].split())
            for i, num in enumerate(nums):
                strip.setPixelColor(i, Color(*colors[num]))

            strip.show()
        except Exception as e:
            print e


if __name__ == '__main__':
    _handle_stdin(_get_spaced_colors(COLORS_COUNT),
                  _get_strip())

For sending data to the Raspberry Pi I just used ssh, like:

./cava -p soundlights/cava_config | ssh pi@retropie.local sudo python soundlights.py

There was a problem with buffering, but unbuffer from expect solved it:

unbuffer ./cava -p soundlights/cava_config | ssh pi@retropie.local sudo python soundlights.py

And that’s all.

Sources on github.

Dave Fancher: The Book of F#



A few months ago I decided to try something different, like a different platform and a different language. As I mostly work with Python/JS/JVM languages, I chose .NET and F#. So I read The Book of F# by Dave Fancher from the Joy of Coding bundle. And it was interesting. The book nicely explains aspects of F# with examples. It also covers non-straightforward parts of the language and explains why they were made that way.

Although the author assumed that readers would be familiar with the .NET platform and C#, it wasn’t a problem at all.

Site Reliability Engineering: How Google Runs Production Systems



Not so long ago I wanted to read something about DevOps and processes in real organisations, so I chose Site Reliability Engineering: How Google Runs Production Systems. It nicely explains deployment, failure recovery, support and other SRE aspects from both engineering and management points of view. It’s also interesting to read about the problems and solutions of huge systems.

The book is informative, but a bit boring. And most of the cases are Google-specific or only become a problem in very large systems.