Async code without callbacks with CoffeeScript generators



Sometimes it’s very hard to understand code with a bunch of callbacks, even when it uses promises. But ES6 and CoffeeScript 1.9 give us generators, so maybe we can use them to avoid callbacks and get something like tornado.gen?

And we can. Let’s look at this little helper function:

gen = (fn) ->
  new Promise (resolve, reject) ->
    generator = fn()

    putInGenerator = (method) -> (val) ->
      try
        handlePromise generator[method](val)
      catch error
        reject error

    handlePromise = ({value, done}) ->
      if done
        resolve value
      else if value and value.then
        value.then putInGenerator('next'), putInGenerator('throw')
      else
        reject "Value isn't a promise!"

    handlePromise generator.next()

With it, code like this:

$http.get('/users/').then ({data}) ->
  doSomethingWithUsers data.users
  $http.get '/posts/'
, (err) ->
  console.log "Can't receive users", err
.then ({data}) ->
  doSomethingWithPosts data.posts
, (err) ->
  console.log "Can't receive posts", err

Can be transformed into something like this:

gen ->
  try
    {data: usersData} = yield $http.get '/users/'
  catch err
    console.log "Can't receive users", err
    return
  doSomethingWithUsers usersData.users

  try
    {data: postsData} = yield $http.get '/posts/'
  catch err
    console.log "Can't receive posts", err
    return
  doSomethingWithPosts postsData.posts

Isn’t it cool? Moreover, the result of gen is itself a promise, so we can write something like:

getUsers = (url) -> gen ->
  {data: {users}} = yield $http.get(url)
  users.map prepareUser

getPosts = (url) -> gen ->
  {data: {posts}} = yield $http.get(url)
  posts.map preparePosts

gen ->
  try
    users = yield getUsers '/users/'
    posts = yield getPosts '/posts/'
  catch err
    console.log "Something goes wrong", err
    return

  doSomethingWithUsers users
  doSomethingWithPosts posts

So, what gen does:

  1. Creates the main promise, which will be returned from gen (and can be consumed like any other promise, as sketched below).
  2. Calls the generator without a value and receives the first yielded promise.
  3. If that promise succeeds, sends its result back into the generator; if it fails, throws the error into the generator. If an exception is raised during .next or .throw, rejects the main promise with that exception.
  4. Receives the next value from the generator: if the generator is done, resolves the main promise with that value; if the value is a promise, repeats the third step; otherwise rejects the main promise.
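
And because the main promise is a regular promise, the result of gen can also be consumed with .then outside of any generator. A minimal sketch, reusing getUsers from above:

getUsers('/users/').then (users) ->
  console.log 'Got', users.length, 'users'
, (err) ->
  console.log "Can't load users", err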

Reinventing OOP with Clojure



From books we all know that the main principles of OOP are polymorphism and encapsulation, but another view is that the essential aspect of OOP is message passing. And in Clojure we have a cool library for dealing with messages: core.async. So we can build a simple “object” with it, using core.match for “parsing” the messages this “object” receives. Yep, it will be something like Erlang actors:

(require '[clojure.core.async :refer [go go-loop chan <! >! >!! <!!]])
(require '[clojure.core.match :refer [match]])

(def dog
  (let [messages (chan)]
    (go-loop []
      (match (<! messages)
        [:bark!] (println "Bark! Bark!")
        [:say! x] (println "Dog said:" x))
      (recur))
    messages))

Here I’ve just created a channel and, in the go-loop, matched the messages received from it against the registered message patterns.

The format of the messages is [:name & args].

We can easily test the dog object by putting messages into the channel:

user=> (>!! dog [:bark!])
# Bark! Bark!

user=> (>!! dog [:say! "Hello world!"])
# Dog said: Hello world!

Looks awesome, but maybe we should add some state? It’s pretty simple:

(def stateful-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark!] (do (println "Bark! Bark!")
                            (update-in state [:barked]
                                       inc))
               [:how-many-barks?] (do (println (:barked state))
                                      state))))
    calls))

I’ve just put the default state into the go-loop bindings and recur with the new state after processing each message. And we can test it:

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 1

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 3

Great, but what if we want to receive the result of a method call? That’s simple too:

(def answering-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark! _] (do (println "Bark! Bark!")
                              (update-in state [:barked]
                                         inc))
               [:how-many-barks? result] (do (>! result (:barked state))
                                             state))))
    calls))

I’ve just added a channel as the last element of the message and put the result into it. It’s not as simple to use as the previous examples, but it’s ok:

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (let [result (chan)]
  #_=>   (>!! answering-dog [:how-many-barks? result])
  #_=>   (<!! result))
2

The last call looks too complex, so let’s add a few helpers to make it easier:

(defn call
  [obj & msg]
  (go (let [result (chan)]
        (>! obj (conj (vec msg) result))
        (<! result))))

(defn call!!
  [obj & msg]
  (<!! (apply call obj msg)))

call!! should only be used outside of go-blocks; call should be used in combination with <! and <!!. Let’s look at them in action:

user=> (call!! answering-dog :how-many-barks?)
2

user=> (<!! (call answering-dog :how-many-barks?))
2

user=> (call!! answering-dog :set-barks!)
# Exception in thread "async-dispatch-33" java.lang.IllegalArgumentException: No matching clause: [:set-barks!...

user=> (call!! answering-dog :how-many-barks?)
# ...

So now we have a problem: when an error happens inside an object, the object dies and no longer responds to messages. So we should add try/catch to every method, and it’s better to use macros to automate that. But first we should define the format of responses:

  • [:ok val] – everything is ok;
  • [:error error-reason] – an error happened;
  • [:none] – we can’t put nil in a channel, so we’ll use this instead.

Yep, you may notice that this looks like the Maybe/Option monad.

So let’s write the macros:

(defn ok! [ch val] (go (>! ch [:ok val])))

(defn error! [ch reason] (go (>! ch [:error reason])))

(defn none! [ch] (go (>! ch [:none])))

(defmacro object
  [default-state & body]
  (let [flat-body (mapcat macroexpand body)]
    `(let [calls# (chan)]
       (go-loop ~default-state
         (recur (match (<! calls#)
                  ~@flat-body
                  [& msg#] (do (error! (last msg#) [:method-not-found (first msg#)])
                               ~@(take-nth 2 default-state)))))
       calls#)))

(defmacro method
  [pattern & body]
  [pattern `(try (do ~@body)
                 (catch Exception e#
                   (error! ~(last pattern) e#)))])

The object macro can be used for creating objects and the method macro for defining methods inside an object. You may notice that the [& msg#] clause works just like method_missing in Ruby.

So now we can create objects using these macros:

(defn make-cat
  [name]
  (object [state {:age 10
                  :name name}]
    (method [:get-name result]
      (ok! result (:name state))
      state)
    (method [:set-name! new-name result]
      (none! result)
      (assoc state :name new-name))
    (method [:make-older! result]
      (error! result :not-implemented)
      state)))

(def cat (make-cat "Simon"))

We created a cat object with the methods get-name, set-name! and make-older!; make-cat is an improvised constructor. This object can be used like all the previous objects, but it’s even more useful in combination with core.match:

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# Simon

user=> (match (call!! cat :set-name! "UltraSimon")
  #_=>   [:none] (println "Name changed"))
# Name changed

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# UltraSimon

user=> (match (call!! cat :make-older!)
  #_=>   [:ok age] (println "Now - " age)
  #_=>   [:error reason] (println "Failed with " reason))
# Failed with  :not-implemented

user=> (match (call!! cat :i-don't-know-what)
  #_=>   [:error _] (println "Failed"))
# Failed

Looks perfect! But that’s not all; later I’ll implement inheritance on top of this mess.

Serving static files using nginx with Docker



Imagine a common situation: we have a container with a web application and a container with nginx, and we want to serve the web app’s static files with nginx. Sounds very simple, but actually it isn’t.

Before I start, here’s the source code of the simple web application that I want to run inside a container:

from flask import Flask, url_for

app = Flask(__name__)

@app.route("/")
def main():
    return '''<h1>Hello world!</h1>
              <img src='{}' />'''.format(url_for('static', filename='image.png'))

if __name__ == '__main__':
    app.run(host='0.0.0.0')

For this app to work properly there should also be an image at static/image.png.

Back to the task. The first idea: put the static files in a volume. So the Dockerfile for the application should look like this:

FROM python:3.4
EXPOSE 5000
VOLUME static
COPY . .
RUN pip install flask
ENTRYPOINT python main.py

It’s simple to build and test:

docker build -t example/app .
docker run -p 5000:5000 --name app example/app

After that you can visit localhost:5000 and make sure the app works.

Now on to the nginx container. First of all we should write the nginx config:

upstream app_upstream {
  server app:5000;
}

server {
  listen 80;

  location /static {
    alias /static;
  }

  location / {
    proxy_pass  http://app_upstream;
  }
}

And the Dockerfile for it:

FROM nginx:1.7
RUN rm /etc/nginx/conf.d/*
COPY app.conf /etc/nginx/conf.d/

So now we can build and run nginx:

docker build -t example/nginx .
docker run -p 8080:80 --link app:app --volumes-from app example/nginx

And it works! You can visit localhost:8080 to make sure. But actually it doesn’t work that well: this setup won’t be great if we want to scale the web app, because there will be one container with both the static files and the web app, plus n containers with just the web app.

So, the second idea: create a data volume container with the static files. The Dockerfile for it:

FROM ubuntu:14.04
VOLUME static
COPY . static

Now we should build it, run it, and restart the nginx container:

docker build -t example/static .
docker run --name static example/static
docker run -p 8080:80 --link app:app --volumes-from static example/nginx

This variant works great, but isn’t it too complex? Maybe there’s a simpler solution? There probably is.

And the third idea: just cache the static files in nginx. So we should update the nginx config to something like this:

proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache:30m max_size=1G;

upstream app_upstream {
  server app:5000;
}

server {
  listen 80;

  location /static {
    proxy_cache cache;
    proxy_cache_valid 30m;
    proxy_pass  http://app_upstream;
  }

  location / {
    proxy_pass  http://app_upstream;
  }
}

And the Dockerfile for nginx too:

FROM nginx:1.7
RUN mkdir /tmp/cache
RUN chown www-data /tmp/cache
RUN rm /etc/nginx/conf.d/*
COPY app.conf /etc/nginx/conf.d/

So now we can build it and run it:

docker build -t example/nginx .
docker run -p 8080:80 --link app:app example/nginx

It works well and it’s very simple; it’s the solution I’ve chosen for my projects. But it has one drawback: to update the static files we have to wait 30 minutes (the cache lifetime) or restart the nginx container.

Jack Moffitt, Fred Daoud: Seven Web Frameworks in Seven Weeks



I’m a big fan of The Pragmatic Bookshelf’s “Seven in Seven” series, so I decided to read “Seven Web Frameworks in Seven Weeks: Adventures in Better Web Apps” by Jack Moffitt and Fred Daoud. I finished it a few days ago, and for me it was a very pleasant read. It was very interesting to try frameworks I hadn’t touched before, and the examples from the book helped me a lot.

I had worked with Clojure Ring and AngularJS before, which are examined well in this book, so I can assume the other frameworks are too. And after reading this book I think I should definitely try to write something using webmachine and Yesod.

CSP on pyboard with uasyncio



Not so long ago @pfalcon mentioned in the microasync bug tracker uasyncio, a port of asyncio for MicroPython. After that I ported the asynchronous queue from asyncio to uasyncio, so now it can replace microasync.

So I conceived a little project: a device that prints information from the pyboard’s accelerometer and an ultrasonic sensor to an OLED display. Sounds very simple, but it needs to update the information on the display whenever the data from one of the sensors changes. So interaction with the sensors should be non-blocking.

I found an almost ready lib (a fork that supports drawing text) for interacting with my OLED display and a lib for working with the ultrasonic sensor (a non-blocking version).

First of all I created a decorator that simplifies writing generators which return a queue and do all of their interaction through it:

class OnlyChanged(Queue):
    def __init__(self, *args, **kwargs):
        self._last_val = None
        super().__init__(*args, **kwargs)

    def put(self, val):
        # Put in queue only if value changed
        if val != self._last_val:
            yield from super().put(val)
            self._last_val = val


def chan(fn):
    def wrapper(*args, **kwargs):
        q = OnlyChanged(1)
        get_event_loop().call_soon(fn(q, *args, **kwargs))
        return q
    return wrapper

So now it’s simple to write a generator that prints data received from the queue to the display:

@chan
def get_display(q, *args, **kwargs):
    display = Display(*args, **kwargs)
    while True:
        lines = yield from q.get()
        display.write(lines)

>>> display = get_display(pinout={'sda': 'Y10',
...                               'scl': 'Y9'},
...                       height=64,
...                       external_vcc=False)
>>> yield from display.put('Hello world!')

[photo: the OLED display]

And a generator for the ultrasonic sensor that puts values into the queue:

@chan
def get_ultrasonic(q, *args, **kwargs):
    ultrasonic = Ultrasonic(*args, **kwargs)
    while True:
        val = yield from ultrasonic.distance_in_cm()
        yield from q.put(val)

>>> ultrasonic = get_ultrasonic('X1', 'X2')
>>> yield from ultrasonic.get()
28.012

A similar generator for the pyboard’s built-in accelerometer (pyb.Accel):

@chan
def get_gyro(q):
    accel = pyb.Accel()
    while True:
        val = accel.filtered_xyz()
        yield from q.put(val)

>>> gyro = get_gyro()
>>> yield from gyro.get()
(12, 9, 72)

And by combining all of them it’s very simple to write the program for the intended device:

def main():
    display = get_display(pinout={'sda': 'Y10',
                                  'scl': 'Y9'},
                          height=64,
                          external_vcc=False)
    current = {'dist': 0, 'xyz': (0, 0, 0)}
    shared_q = alts(dist=get_ultrasonic('X1', 'X2'),
                    xyz=get_gyro())
    while True:
        source, val = yield from shared_q.get()
        current[source] = val
        yield from display.put(
            'Distance: {:0.2f}cm\n'
            'x: {} y: {} z: {}'.format(current['dist'], *current['xyz']))


>>> loop = get_event_loop()
>>> loop.call_soon(main())
>>> loop.run_forever()
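
The alts helper used in main isn’t shown here; it lives in the linked gist. Roughly, it merges several queues into one, tagging each value with the name of its source. A hypothetical sketch, assuming the same Queue-based OnlyChanged and get_event_loop as in the snippets above (the real implementation may differ):

def alts(**queues):
    # Merge several queues into one; each value is tagged with the
    # keyword-argument name of the queue it came from.
    out = OnlyChanged(1)

    def drain(name, q):
        while True:
            val = yield from q.get()
            yield from out.put((name, val))

    for name, q in queues.items():
        get_event_loop().call_soon(drain(name, q))
    return out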

So the resulting code is very simple, all the components are decoupled, and it’s easy to test.

Gist with the source code.

Getting total count of indexed documents in the GAE Search API



Around a month ago I was stuck with a task: I had to get the total count of indexed documents in the GAE Search API. Sounds simple, but it isn’t: this API doesn’t have a method for that. It has something similar, storage_usage, but that attribute contains the size of the index in bytes. However, the API provides a method for receiving document ids, so I tried:

>>> len(index.get_range(ids_only=True))
100
>>> len(index.get_range(ids_only=True, limit=1000))
1000
>>> len(index.get_range(ids_only=True, limit=1001))
ValueError: limit, 1001 must be <= 1000

And another pitfall: get_range can’t return more than 1000 ids. So I tried to run it in a loop:

def calculate_count():
    # Start with 1 because each following range starts with
    # the last item of the previous range:
    result = 1
    start_id = None
    while True:
        if start_id is None:
            index_range = index.get_range(ids_only=True, limit=1000)
        else:
            index_range = index.get_range(start_id=start_id,
                                          ids_only=True, limit=1000)
        if len(index_range) > 1:
            start_id = index_range[-1].doc_id
            result += len(index_range) - 1  # Don't recount the item shared with the previous range
        else:
            return result

>>> calculate_count()
DeadlineExceededError

Yep, it takes more than 60 seconds. But then I tried to run each iteration in a deferred task and write the result to memcache:

def calculate_count(start_id=None, result=1):
    if start_id is None:
        memcache.delete('RESULT')
        index_range = index.get_range(ids_only=True, limit=1000)
    else:
        index_range = index.get_range(start_id=start_id,
                                      ids_only=True, limit=1000)
    if len(index_range) > 1:
        deferred.defer(calculate_count,
                       index_range[-1].doc_id,
                       len(index_range) - 1 + result)
    else:
        memcache.set('RESULT', result)

>>> calculate_count()
# Wait few minutes...
>>> memcache.get('RESULT')
1398762

And it works!

James Turnbull: The Docker Book



A few days ago I finished reading The Docker Book by James Turnbull, and it’s probably a good book for those who are just starting to use Docker. The book explains common patterns of Docker usage: for development, for testing and for deployment. It also describes how Docker works inside, and there is a lot of information about tools that simplify working with Docker; from this book I learned about a nice tool called Fig.

Maybe some parts of the book duplicate the official documentation, but for me it was a great addition to it. And maybe that’s because I hadn’t used Docker much before.

Chrome extension in ClojureScript



Sometimes I need to write or change a bunch of code in the GAE interactive console, and sometimes I need to change build scripts in Jenkins tasks. It’s not comfortable to write code in a plain browser textarea.

So I decided to create a Chrome extension that converts a textarea into a code editor (and back) in a few clicks. As the editor I chose Ace, because it’s simple to use and I’d worked with it before. As the language I chose ClojureScript.

For the impatient: the source code of the extension is on github, and the extension itself is in the Chrome Web Store.

Developing this extension is almost the same as developing one in JavaScript, and nearly the same as an ordinary ClojureScript application. But I found a few pitfalls and differences.

ClojureScript compilation

We can’t use :optimizations :none in a Chrome extension because of the way goog.require loads dependencies. And we have to build separate compiled js files for each of the background/content/options/etc “pages”. So my cljsbuild configuration:

{:builds {:background {:source-paths ["src/textarea_to_code_editor/background/"]
                       :compiler {:output-to "resources/background/main.js"
                                  :output-dir "resources/background/"
                                  :source-map "resources/background/main.js.map"
                                  :optimizations :whitespace
                                  :pretty-print true}}
          :content {:source-paths ["src/textarea_to_code_editor/content/"]
                    :compiler {:output-to "resources/content/main.js"
                               :output-dir "resources/content/"
                               :source-map "resources/content/main.js.map"
                               :optimizations :whitespace
                               :pretty-print true}}}}

If you want to use :optimizations :advanced, you can download externs for the Chrome API from github.

Chrome API

At first glance, using the Chrome API from ClojureScript is a bit uncomfortable, but with the .. macro it looks no worse than in JavaScript. For example, adding a listener for runtime messages in js:

chrome.runtime.onMessage.addListener(function(msg){
    console.log(msg);
});

And in ClojureScript:

(.. js/chrome -runtime -onMessage (addListener #(.log js/console %)))

Testing

Because we can’t use the Chrome API in tests, I created a little function for detecting whether it’s available:

(defn available? [] (aget js/window "chrome"))

And I run all the extension bootstrapping code inside (when (available?) ...). So now it’s simple to use with-redefs and with-reset (for mocking inside async tests) to mock the Chrome API.
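
For example, a minimal sketch of such a guard (the listener body here is just a placeholder, not the extension’s real code):

(when (available?)
  ;; Register Chrome listeners only when the real API exists,
  ;; so tests can load this namespace without a browser.
  (.. js/chrome -runtime -onMessage
      (addListener (fn [msg _ _] (println "got message:" msg)))))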

For running tests I used clojurescript.test; my config:

{:builds {:test {:source-paths ["src/" "test/"]
                 :compiler {:output-to "target/cljs-test.js"
                            :optimizations :whitespace
                            :pretty-print false}}}
 :test-commands {"test" ["phantomjs" :runner
                         "resources/components/ace-builds/src/ace.js"
                         "resources/components/ace-builds/src/mode-clojure.js"
                         "resources/components/ace-builds/src/mode-python.js"
                         "resources/components/ace-builds/src/theme-monokai.js"
                         "resources/components/ace-builds/src/ext-modelist.js"
                         "target/cljs-test.js"]}}

Benefits

Message passing between the extension’s background and content parts is a bit painful, because it always turns into huge callback hell. But core.async (and a bit of core.match) can save us. For example, handling messages on the content side:

(go-loop []
  (match (<! msg-chan)
    [:populate-context-menu data sender-chan] (h/populate-context-menu! data
                                                                        (:used-modes @storage)
                                                                        sender-chan
                                                                        msg-chan)
    [:clear-context-menu _ _] (h/clear-context-menu!)
    [:update-used-modes mode _] (h/update-used-modes! storage mode)
    [& msg] (println "Unmatched message:" msg))
  (recur))

Sources of the content-side and background-side helpers for sending/receiving Chrome runtime messages using core.async channels.
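
Roughly, such a helper wraps chrome.runtime.onMessage in a core.async channel. A hypothetical sketch (the real helpers linked above use a richer message format with a reply channel):

;; assumes (:require [cljs.core.async :refer [chan put!]])
(defn messages-chan
  []
  (let [ch (chan)]
    (.. js/chrome -runtime -onMessage
        (addListener (fn [msg sender _]
                       (put! ch [msg sender]))))
    ch))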

Links

Sources are on github, and the extension is in the Chrome Web Store.

Eric Evans: Domain-Driven Design: Tackling Complexity in the Heart of Software



This week I finished reading “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans, and I think it’s a great book, maybe even a “must read”. It contains good explanations of patterns, with example situations where they should be used. The book also contains interesting information about software design and software development as a whole.

On the other hand, this is quite an old book; it was published in 2003. Some parts are a bit outdated, and some patterns and examples are useless now. Also, some examples are too “enterprise”, and classes like CargoSpecificationRepositoryFactory scare me. After reading some of the examples, I assume it was painful to be a Java developer in the early 00s.

Mocking ClojureScript code written with core.async



When I write tests for ClojureScript code that uses core.async, I feel a little pain: with-redefs doesn’t work correctly with go-blocks. For example, I have a function:

(defn get-subtitles
  [sources limit]
  (go (-> (http/get (get-url sources limit))
          <!
          :body
          format-dates)))

And the test for it:

(deftest ^:async test-get-subtitles
  (with-redefs [http/get (constantly fixture)]
    (go (is (= (<! (get-subtitles const/all 100)) expected))
        (done))))

It doesn’t work; it actually tries to make an http request. Ok, I can try to put with-redefs inside the go-block:

(deftest ^:async test-get-subtitles
  (go (with-redefs [http/get (constantly fixture)]
        (is (= (<! (get-subtitles const/all 100)) expected)))
      (with-redefs [http/get (constantly blank-result)]
        (is (= (<! (get-subtitles const/addicted 100)) [])))
      (done)))

The first assertion works, but the second one still sees the previously redefined http/get, which is incorrect, so the assertion fails: with-redefs permanently changes the var when applied inside a go-block.

So I developed a little macro that works like with-redefs, can be used inside go-blocks, and also works with code that doesn’t use core.async:

(defmacro with-reset
  [bindings & body]
  (let [names (take-nth 2 bindings)
        vals (take-nth 2 (drop 1 bindings))
        current-vals (map #(list 'identity %) names)
        tempnames (map (comp gensym name) names)
        binds (map vector names vals)
        resets (reverse (map vector names tempnames))
        bind-value (fn [[k v]] (list 'set! k v))]
    `(let [~@(interleave tempnames current-vals)]
       (try
         ~@(map bind-value binds)
         ~@body
         (finally
           ~@(map bind-value resets))))))

And usage:

(deftest ^:async test-get-subtitles
  (go (with-reset [http/get (constantly fixture)]
        (is (= (<! (get-subtitles const/all 100)) expected)))
      (with-reset [http/get (constantly blank-result)]
        (is (= (<! (get-subtitles const/addicted 100)) [])))
      (done)))

I put this macro in clj-di to avoid copying it between projects.

Tests for this macro are on github.

UPD: example updated.