Bruce Tate, Fred Daoud, Jack Moffitt, Ian Dees: Seven More Languages in Seven Weeks



I read the previous book in this series (Seven Languages in Seven Weeks) about a year ago, so now I've finished Seven More Languages in Seven Weeks: Languages That Are Shaping the Future by Bruce Tate, Fred Daoud, Jack Moffitt, and Ian Dees, and I wasn't disappointed. This book contains good descriptions and explanations of the core concepts of seven languages: Lua, Factor, Elixir, Elm, Julia, MiniKanren, and Idris. Actually, MiniKanren isn't a general-purpose language, it's a DSL; this book uses core.logic for Clojure.

For me the most interesting parts were reading about (and doing the exercises for) Elm, MiniKanren, Factor, and Idris, and I think I'll try to use some ideas and patterns from these languages in my projects. This book also showed me the cool concept of tables and metatables in Lua, and I'll probably try to write something in this language, maybe a little game with LÖVE.

Colin Jones: Mastering Clojure Macros



About a month ago Mastering Clojure Macros: Write Cleaner, Faster, Smarter Code by Colin Jones was recommended to me. It's a short book, and a few days ago I finished it. And it's a good book: it contains good examples of macros, explains how some macros from core and from popular libraries (compojure, hiccup, etc.) work, and covers some pitfalls and best practices.

Also, from this book I learned about the useful name-with-attributes helper from tools.macro; before that I'd always reinvented the wheel whenever I needed to create a defsomething macro with docstring and metadata support.
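
For reference, a minimal sketch of how it can be used (the defsomething macro and the docstring below are made up for illustration):

(require '[clojure.tools.macro :refer [name-with-attributes]])

(defmacro defsomething
  [name & args]
  ;; name-with-attributes returns the name (with the optional docstring
  ;; and attribute map merged into its metadata) plus the remaining forms
  (let [[name body] (name-with-attributes name args)]
    `(def ~name (do ~@body))))

(defsomething answer
  "The answer to everything."
  42)

;; (:doc (meta #'answer)) ;=> "The answer to everything."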

The Architecture of Open Source Applications, Volume II



I read the first volume of this book about a year ago and liked it, and now I've finished the second volume and wasn't disappointed. Not all of the chapters were interesting to me, but most of them were. It was very interesting to read about the architecture of Git, GHC, DLR, nginx, Processing.js, PyPy, SQLAlchemy, Yesod, and ZeroMQ.

And this book explains more than just architecture: almost all chapters contain a cool “Lessons Learned” section that discusses the right and wrong decisions made in the projects. A lot of chapters also cover the history of the project; it's interesting to read how and why a project evolved to its current state.

Async code without callbacks with CoffeeScript generators



Sometimes it’s very hard to understand code with a bunch of callbacks, even if it with promises. But in ES6 and in CoffeeScript 1.9 we got generators, so maybe we can avoid callbacks with them, and use something like tornado.gen?

And we can. Let's look at this little helper function:

gen = (fn) ->
  new Promise (resolve, reject) ->
    generator = fn()

    putInGenerator = (method) -> (val) ->
      try
        handlePromise generator[method](val)
      catch error
        reject error

    handlePromise = ({value, done}) ->
      if done
        resolve value
      else if value and value.then
        value.then putInGenerator('next'), putInGenerator('throw')
      else
        reject "Value isn't a promise!"

    handlePromise generator.next()

With it, code like this:

$http.get('/users/').then ({data}) ->
  doSomethingWithUsers data.users
  $http.get '/posts/'
, (err) ->
  console.log "Can't receive users", err
.then ({data}) ->
  doSomethingWithPosts data.posts
, (err) ->
  console.log "Can't receive posts", err

Can be transformed into something like this:

gen ->
  try
    {data: usersData} = yield $http.get '/users/'
  catch err
    console.log "Can't receive users", err
    return
  doSomethingWithUsers usersData.users

  try
    {data: postsData} = yield $http.get '/posts/'
  catch err
    console.log "Can't receive posts", err
    return
  doSomethingWithPosts postsData.posts

Isn’t it cool? But more, result of gen is a promise, so we can write something like:

getUsers = (url) -> gen ->
  {data: {users}} = yield $http.get(url)
  users.map prepareUser

getPosts = (url) -> gen ->
  {data: {posts}} = yield $http.get(url)
  posts.map preparePosts

gen ->
  try
    users = yield getUsers '/users/'
    posts = yield getPosts '/posts/'
  catch err
    console.log "Something goes wrong", err
    return

  doSomethingWithUsers users
  doSomethingWithPosts posts

So, what does gen do:

  1. Creates the main promise, which will be returned from gen.
  2. Calls the generator for the first time (sending nothing into it) and receives the first yielded promise.
  3. If that promise succeeds, sends its result into the generator; if it fails, throws the error into the generator. If we get an exception during .next or .throw, rejects the main promise with that exception (see the sketch below).
  4. Receives a new value from the generator: if the generator is done, resolves the main promise with the received value; if the value is a promise, repeats the third step; otherwise rejects the main promise.
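
And since an uncaught error inside the generator rejects the main promise (that's step 3), a single .catch at the end can handle a failure from any of the yielded promises. A small sketch, reusing the $http service from the examples above (usersPromise is just an illustrative name):

usersPromise = gen ->
  {data} = yield $http.get '/users/'
  data.users

usersPromise.catch (err) ->
  console.log "Request failed", err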

Reinventing OOP with Clojure



From books we all know that the main principles of OOP are polymorphism and encapsulation, but another view is that the most significant aspect of OOP is message passing. And in Clojure we have a cool library for dealing with messages – core.async. So we can build a simple “object” with it, and we can use core.match for “parsing” the messages in this “object”. Yep, it will be something like Erlang actors:

(require '[clojure.core.async :refer [go go-loop chan <! >! >!! <!!]])
(require '[clojure.core.match :refer [match]])

(def dog
  (let [messages (chan)]
    (go-loop []
      (match (<! messages)
        [:bark!] (println "Bark! Bark!")
        [:say! x] (println "Dog said:" x))
      (recur))
    messages))

Here I’ve just created channel and in the go-loop matched received messages from them with registered messages patterns.

The format of a message is [:name & args].

We can easily test the dog object by putting a message on the channel:

user=> (>!! dog [:bark!])
# Bark! Bark!

user=> (>!! dog [:say! "Hello world!"])
# Dog said: Hello world!

Looks awesome, but maybe we should add some state? It's pretty simple:

(def stateful-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark!] (do (println "Bark! Bark!")
                            (update-in state [:barked]
                                       inc))
               [:how-many-barks?] (do (println (:barked state))
                                      state))))
    calls))

I’ve just put default state in the bindings for go-loop and recur it with new state after processing messages. And we can test it:

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 1

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 3

Great, but what if we want to receive the result of a method call? That's simple too:

(def answering-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark! _] (do (println "Bark! Bark!")
                              (update-in state [:barked]
                                         inc))
               [:how-many-barks? result] (do (>! result (:barked state))
                                             state))))
    calls))

I’ve just set a channel as a last argument of the message and put result in it. It’s not that simple to use like previous examples, but it’s ok:

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (let [result (chan)]
  #_=>   (>!! answering-dog [:how-many-barks? result])
  #_=>   (<!! result))
2

The last call looks too complex, so let's add a few helpers to make it easier:

(defn call
  [obj & msg]
  (go (let [result (chan)]
        (>! obj (conj (vec msg) result))
        (<! result))))

(defn call!!
  [obj & msg]
  (<!! (apply call obj msg)))

call!! should be used only outside of go blocks; call should be used in combination with <! or <!!. Let's look at them in action:

user=> (call!! answering-dog :how-many-barks?)
2

user=> (<!! (call answering-dog :how-many-barks?))
2

user=> (call!! answering-dog :set-barks!)
# Exception in thread "async-dispatch-33" java.lang.IllegalArgumentException: No matching clause: [:set-barks!...

user=> (call!! answering-dog :how-many-barks?)
# ...

So now we have a problem: when an error happens inside an object, the object dies and no longer responds to messages. We should add try/catch to all methods, and it's better to use macros to automate that. But first we should define the format of a response:

  • [:ok val] – everything went OK;
  • [:error error-reason] – an error happened;
  • [:none] – we can't put just nil on a channel, so we'll use this instead.

Yep, you may notice that this looks like the Maybe/Option monad.

So let’s write macroses:

(defn ok! [ch val] (go (>! ch [:ok val])))

(defn error! [ch reason] (go (>! ch [:error reason])))

(defn none! [ch] (go (>! ch [:none])))

(defmacro object
  [default-state & body]
  (let [flat-body (mapcat macroexpand body)]
    `(let [calls# (chan)]
       (go-loop ~default-state
         (recur (match (<! calls#)
                  ~@flat-body
                  [& msg#] (do (error! (last msg#) [:method-not-found (first msg#)])
                               ~@(take-nth 2 default-state)))))
       calls#)))

(defmacro method
  [pattern & body]
  [pattern `(try (do ~@body)
                 (catch Exception e#
                   (error! ~(last pattern) e#)))])

The object macro can be used for creating objects, and the method macro for defining methods inside an object. You may notice that the [& msg#] clause works exactly like method_missing in Ruby.

So now we can create objects using these macros:

(defn make-cat
  [name]
  (object [state {:age 10
                  :name name}]
    (method [:get-name result]
      (ok! result (:name state))
      state)
    (method [:set-name! new-name result]
      (none! result)
      (assoc state :name new-name))
    (method [:make-older! result]
      (error! result :not-implemented)
      state)))

(def cat (make-cat "Simon"))

We've created a cat object with the methods get-name, set-name! and make-older!; make-cat is an improvised constructor. This object can be used like all the previous objects, but in combination with core.match it's even more useful:

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# Simon

user=> (match (call!! cat :set-name! "UltraSimon")
  #_=>   [:none] (println "Name changed"))
# Name changed

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# UltraSimon

user=> (match (call!! cat :make-older!)
  #_=>   [:ok age] (println "Now - " age)
  #_=>   [:error reason] (println "Failed with " reason))
# Failed with  :not-implemented

user=> (match (call!! cat :i-don't-know-what)
  #_=>   [:error _] (println "Failed"))
# Failed

Looks perfect! But that's not all; later I'll implement inheritance on top of this mess.

Serving static using nginx with docker



Imagine a common situation: we have a container with a web application and a container with nginx, and we want to serve the web app's static files with nginx. Sounds very simple, but actually it isn't.

Before I start, here's the source code of the simple web application that I want to run inside a container:

from flask import Flask, url_for

app = Flask(__name__)

@app.route("/")
def main():
    return '''<h1>Hello world!</h1>
              <img src='{}' />'''.format(url_for('static', filename='image.png'))

if __name__ == '__main__':
    app.run(host='0.0.0.0')

For this app to work properly, there should also be an image at static/image.png.

Back to the task. The first idea: put the static files in a volume. The Dockerfile for the application then looks like this:

FROM python:3.4
EXPOSE 5000
VOLUME static
COPY . .
RUN pip install flask
ENTRYPOINT python main.py

It’s simple to build and to test:

docker build -t example/app .
docker run -p 5000:5000 --name app example/app

After that you can visit localhost:5000 and make sure the app works.

Now on to the container for nginx. First of all, we should write a config for nginx:

upstream app_upstream {
  server app:5000;
}

server {
  listen 80;

  location /static {
    alias /static;
  }

  location / {
    proxy_pass  http://app_upstream;
  }
}

And the Dockerfile for it:

FROM nginx:1.7
RUN rm /etc/nginx/conf.d/*
COPY app.conf /etc/nginx/conf.d/

So now we can build and run nginx:

docker build -t example/nginx .
docker run -p 8080:80 --link app:app --volumes-from app example/nginx

And it works! You can visit localhost:8080 to make sure. But actually it doesn't work that well – it won't be that cool if we want to scale this web app: there will be one container with both the static files and the web app, and also n containers with just the web app.

So, the second idea: create a data volume container with the static files. The Dockerfile for it:

FROM ubuntu:14.04
VOLUME static
COPY . static

Now we should build it, run it, and restart the nginx container:

docker build -t example/static .
docker run --name static example/static
docker run -p 8080:80 --link app:app --volumes-from static example/nginx

This variant works great, but isn't it too complex? Maybe there's a simpler solution? There probably is.

And the third idea: just cache the static files in nginx. So we should update the nginx config to something like this:

proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache:30m max_size=1G;

upstream app_upstream {
  server app:5000;
}

server {
  listen 80;

  location /static {
    proxy_cache cache;
    proxy_cache_valid 30m;
    proxy_pass  http://app_upstream;
  }

  location / {
    proxy_pass  http://app_upstream;
  }
}

And the Dockerfile for nginx too:

FROM nginx:1.7
RUN mkdir /tmp/cache
RUN chown www-data /tmp/cache
RUN rm /etc/nginx/conf.d/*
COPY app.conf /etc/nginx/conf.d/

So now we can build it and run it:

docker build -t example/nginx .
docker run -p 8080:80 --link app:app example/nginx

It works well and it's very simple; I've chosen this solution for my projects. But it has one drawback: to update the static files we have to either wait 30 minutes or restart the nginx container.

Jack Moffitt, Fred Daoud: Seven Web Frameworks in Seven Weeks



book cover white I’m a big fan of seven in seven books series of The Pragmatic Bookshelf, so I decided to read “Seven Web Frameworks in Seven Weeks: Adventures in Better Web Apps” by Jack Moffitt and Fred Daoud. And few days ago I finished, and for me this book been a very pleasant reading. It been very interesting to try untouched frameworks and examples from this book helped me a lot.

I had worked with Clojure Ring and AngularJS before, and they were examined well in this book, so I can assume the other frameworks are too. And after reading this book I think I should definitely try to write something using webmachine and Yesod.

CSP on pyboard with uasyncio



Not so long ago @pfalcon mentioned in the microasync bug tracker a port of asyncio for MicroPython – uasyncio. After that I ported the asynchronous queue from asyncio to uasyncio, so now it can replace microasync.

So I conceived a little project: a device which prints information from the pyboard gyro sensor and an ultrasonic sensor to an OLED display. Sounds very simple, but it needs to update the information on the display whenever the data from one of the sensors changes, so interaction with the sensors should be non-blocking.

I found an almost ready-to-use lib (a fork which supports drawing text) for interacting with my OLED display, and a lib for working with the ultrasonic sensor (a non-blocking version).

First of all I created a decorator to simplify creating generators that return a queue and do all their interaction through it:

# imports assume the micropython-lib flavor of uasyncio
from uasyncio import get_event_loop
from uasyncio.queues import Queue


class OnlyChanged(Queue):
    def __init__(self, *args, **kwargs):
        self._last_val = None
        super().__init__(*args, **kwargs)

    def put(self, val):
        # Put in queue only if value changed
        if val != self._last_val:
            yield from super().put(val)
            self._last_val = val


def chan(fn):
    def wrapper(*args, **kwargs):
        q = OnlyChanged(1)
        get_event_loop().call_soon(fn(q, *args, **kwargs))
        return q
    return wrapper

So now it’s simple to write generator, which prints to display data received from the queue:

@chan
def get_display(q, *args, **kwargs):
    display = Display(*args, **kwargs)
    while True:
        lines = yield from q.get()
        display.write(lines)

>>> display = get_display(pinout={'sda': 'Y10',
...                               'scl': 'Y9'},
...                       height=64,
...                       external_vcc=False)
>>> yield from display.put('Hello world!')

(photo of the OLED display)

And a generator for the ultrasonic sensor, which puts values into the queue:

@chan
def get_ultrasonic(q, *args, **kwargs):
    ultrasonic = Ultrasonic(*args, **kwargs)
    while True:
        val = yield from ultrasonic.distance_in_cm()
        yield from q.put(val)

>>> ultrasonic = get_ultrasonic('X1', 'X2')
>>> yield from ultrasonic.get()
28.012

A similar generator for the pyboard gyro sensor:

@chan
def get_gyro(q):
    accel = pyb.Accel()
    while True:
        val = accel.filtered_xyz()
        yield from q.put(val)

>>> gyro = get_gyro()
>>> yield from gyro.get()
(12, 9, 72)

And by combining all of them, it's very simple to write the program for the planned device:

def main():
    display = get_display(pinout={'sda': 'Y10',
                                  'scl': 'Y9'},
                          height=64,
                          external_vcc=False)
    current = {'dist': 0, 'xyz': (0, 0, 0)}
    shared_q = alts(dist=get_ultrasonic('X1', 'X2'),
                    xyz=get_gyro())
    while True:
        source, val = yield from shared_q.get()
        current[source] = val
        yield from display.put(
            'Distance: {:0.2f}cm\n'
            'x: {} y: {} z: {}'.format(current['dist'], *current['xyz']))


>>> loop = get_event_loop()
>>> loop.call_soon(main())
>>> loop.run_forever()

So the resulting code is very simple, all the components are decoupled, and it's easy to test. Video of the result:

Gist with the source code.

Getting total count of indexed documents in the GAE Search API



Around a month ago I was stuck with a task: I had to get the total count of indexed documents in the GAE Search API. Sounds simple, but it isn't: this API doesn't have a method for doing that. It has something similar – storage_usage – but that attribute contains the size of the index in bytes. The API does provide a method for retrieving the ids of documents, though, so I tried:

>>> len(index.get_range(ids_only=True))
100
>>> len(index.get_range(ids_only=True, limit=1000))
1000
>>> len(index.get_range(ids_only=True, limit=1001))
ValueError: limit, 1001 must be <= 1000

And another pitfall: get_range can't return more than 1000 ids. So I tried to run it in a loop:

def calculate_count():
    # Start with 1 because each following range will
    # start with the last item of the previous range:
    result = 1
    start_id = None
    while True:
        if start_id is None:
            index_range = index.get_range(ids_only=True, limit=1000)
        else:
            index_range = index.get_range(start_id=start_id,
                                          ids_only=True, limit=1000)
        if len(index_range) > 1:
            start_id = index_range[-1].doc_id
            result += len(index_range) - 1  # minus the item repeated from the previous range
        else:
            return result

>>> calculate_count()
DeadlineExceededError

Yep, it takes more than 60 seconds. But then I tried running each iteration in a deferred task and writing the result to memcache:

def calculate_count(start_id=None, result=1):
    if start_id is None:
        memcache.delete('RESULT')
        index_range = index.get_range(ids_only=True, limit=1000)
    else:
        index_range = index.get_range(start_id=start_id,
                                      ids_only=True, limit=1000)
    if len(index_range) > 1:
        deferred.defer(calculate_count,
                       index_range[-1].doc_id,
                       len(index_range) - 1 + result)
    else:
        memcache.set('RESULT', result)

>>> calculate_count()
# Wait a few minutes...
>>> memcache.get('RESULT')
1398762

And it works!

James Turnbull: The Docker Book



A few days ago I finished reading The Docker Book by James Turnbull, and it's probably a good book for those who are just starting to use Docker. It explains patterns of Docker usage: for development, for testing, and for deployment. It also covers how Docker works inside, and a lot of tools which simplify working with Docker; from this book I learned about a good tool called Fig.

Maybe some parts of this book duplicate the official documentation, but for me it was a great addition to it. And maybe that's because I hadn't used Docker much before.