Larry Gonick, Woollcott Smith: The Cartoon Guide to Statistics



book cover Recently I’ve noticed that I’m lacking some basics in statistics and was recommended The Cartoon Guide to Statistics by Larry Gonick and Woollcott Smith. The format of the book is a bit silly, but it covers a lot of topics and explains things in an easy-to-understand way. The book has a lot of images and loosely related stories for the topics it explains.

Although I’m not a big fan of the format, the book itself is nice.

Betsy Beyer, Niall Richard Murphy, David K. Rensin, Kent Kawahara and Stephen Thorne: The Site Reliability Workbook



book cover white More than two years ago I read the SRE Book, and now I’ve finally found the time to read The Site Reliability Workbook by Betsy Beyer, Niall Richard Murphy, David K. Rensin, Kent Kawahara and Stephen Thorne. This book is more interesting, as it’s less idealistic and contains a lot of cases from real life. It has examples of correct and wrong SLOs, explains how to properly implement alerts based on your error budget, and even covers the human part of SRE a bit.

Overall, the SRE Workbook is one of the best books I’ve read recently, though that might be because I’ve been doing related things at work for the last few weeks.

Measuring community opinion: subreddits’ reactions to a link



As everyone knows, a lot of subreddits are opinionated, so I thought it might be interesting to measure the opinion of different subreddits. To avoid starting a holy war, I specifically decided to ignore r/worldnews and similar subreddits, and chose a semi-random topic – “Apu reportedly being written out of The Simpsons”.

For accessing the Reddit API I’ve decided to use praw, because it already implements all the OAuth-related stuff and is almost the same as the REST API.
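The snippets below assume a configured praw client and a few common imports; a minimal setup sketch (the credentials are placeholders, not from the original post):

import praw
import pandas as pd
from datetime import datetime
from matplotlib.ticker import FuncFormatter  # used for the percentage axes below

# Placeholder credentials; `url` used below is the IndieWire article being measured
reddit = praw.Reddit(client_id='YOUR_CLIENT_ID',
                     client_secret='YOUR_CLIENT_SECRET',
                     user_agent='subreddit-opinion-analysis')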

As a first step, I’ve found all posts with that URL and populated a pandas DataFrame:

[*posts] = reddit.subreddit('all').search(f"url:{url}", limit=1000)

posts_df = pd.DataFrame(
    [(post.id, post.subreddit.display_name, post.title, post.score,
      datetime.utcfromtimestamp(post.created_utc), post.url,
      post.num_comments, post.upvote_ratio)
     for post in posts],
    columns=['id', 'subreddit', 'title', 'score', 'created',
             'url', 'num_comments', 'upvote_ratio'])

posts_df.head()
       id         subreddit                                                                            title  score             created                                                                              url  num_comments  upvote_ratio
0  9rmz0o        television                                            Apu to be written out of The Simpsons   1455 2018-10-26 17:49:00  https://www.indiewire.com/2018/10/simpsons-drop-apu-character-adi-shankar-12...          1802          0.88
1  9rnu73        GamerGhazi                                 Apu reportedly being written out of The Simpsons     73 2018-10-26 19:30:39  https://www.indiewire.com/2018/10/simpsons-drop-apu-character-adi-shankar-12...            95          0.83
2  9roen1  worstepisodeever                                                     The Simpsons Writing Out Apu     14 2018-10-26 20:38:21  https://www.indiewire.com/2018/10/simpsons-drop-apu-character-adi-shankar-12...            22          0.94
3  9rq7ov          ABCDesis  The Simpsons Is Eliminating Apu, But Producer Adi Shankar Found the Perfec...     26 2018-10-27 00:40:28  https://www.indiewire.com/2018/10/simpsons-drop-apu-character-adi-shankar-12...            11          0.84
4  9rnd6y         doughboys                                            Apu to be written out of The Simpsons     24 2018-10-26 18:34:58  https://www.indiewire.com/2018/10/simpsons-drop-apu-character-adi-shankar-12...             9          0.87

The easiest metric for opinion is the upvote ratio:

posts_df[['subreddit', 'upvote_ratio']] \
    .groupby('subreddit') \
    .mean()['upvote_ratio'] \
    .reset_index() \
    .plot(kind='barh', x='subreddit', y='upvote_ratio',
          title='Upvote ratio', legend=False) \
    .xaxis \
    .set_major_formatter(FuncFormatter(lambda x, _: '{:.1f}%'.format(x * 100)))

But it doesn’t tell us anything:

Upvote ratio

The most straightforward metric to measure is score:

posts_df[['subreddit', 'score']] \
    .groupby('subreddit') \
    .sum()['score'] \
    .reset_index() \
    .plot(kind='barh', x='subreddit', y='score', title='Score', legend=False)

Score by subreddit

The second obvious metric is the number of comments:

posts_df[['subreddit', 'num_comments']] \
    .groupby('subreddit') \
    .sum()['num_comments'] \
    .reset_index() \
    .plot(kind='barh', x='subreddit', y='num_comments',
          title='Number of comments', legend=False)

Number of comments

As absolute numbers can’t tell us anything about the opinion of a subreddit, I’ve decided to calculate a normalized score and number of comments, using data from the last 1000 posts of each subreddit:

def normalize(post):
    [*subreddit_posts] = reddit.subreddit(post.subreddit.display_name).new(limit=1000)
    subreddit_posts_df = pd.DataFrame([(post.id, post.score, post.num_comments)
                                       for post in subreddit_posts],
                                      columns=('id', 'score', 'num_comments'))

    norm_score = ((post.score - subreddit_posts_df.score.mean())
                  / (subreddit_posts_df.score.max() - subreddit_posts_df.score.min()))
    norm_num_comments = ((post.num_comments - subreddit_posts_df.num_comments.mean())
                         / (subreddit_posts_df.num_comments.max() - subreddit_posts_df.num_comments.min()))

    return norm_score, norm_num_comments

normalized_vals = pd \
    .DataFrame([normalize(post) for post in posts],
               columns=['norm_score', 'norm_num_comments']) \
    .fillna(0)

posts_df[['norm_score', 'norm_num_comments']] = normalized_vals

And looked at the popularity of the link based on those numbers:

posts_df[['subreddit', 'norm_score', 'norm_num_comments']] \
    .groupby('subreddit') \
    .sum()[['norm_score', 'norm_num_comments']] \
    .reset_index() \
    .rename(columns={'norm_score': 'Normalized score',
                     'norm_num_comments': 'Normalized number of comments'}) \
    .plot(kind='barh', x='subreddit', title='Normalized popularity')

Normalized popularity

As the same link can be shared in different subreddits with different titles and totally different sentiments, it seemed interesting to run sentiment analysis on the titles:

from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon') on first use

sid = SentimentIntensityAnalyzer()

posts_sentiments = posts_df.title.apply(sid.polarity_scores).apply(pd.Series)
posts_df = posts_df.assign(title_neg=posts_sentiments.neg,
                           title_neu=posts_sentiments.neu,
                           title_pos=posts_sentiments.pos,
                           title_compound=posts_sentiments['compound'])

And notice that people are using the same title almost every time:

posts_df[['subreddit', 'title_neg', 'title_neu', 'title_pos', 'title_compound']] \
    .groupby('subreddit') \
    .sum()[['title_neg', 'title_neu', 'title_pos', 'title_compound']] \
    .reset_index() \
    .rename(columns={'title_neg': 'Title negativity',
                     'title_pos': 'Title positivity',
                     'title_neu': 'Title neutrality',
                     'title_compound': 'Title sentiment'}) \
    .plot(kind='barh', x='subreddit', title='Title sentiments', legend=True)

Title sentiments

The sentiment of a title isn’t that interesting, but it might be much more interesting for comments. I’ve decided to handle only root comments, as replies might be totally unrelated to the post subject and make everything more complicated. For the comment analysis I’ve bucketed comments into five buckets by compound value and calculated the mean normalized score and the percentage for each bucket:

# handle_post_comments is huge and available in the gist
posts_comments_df = pd \
    .concat([handle_post_comments(post) for post in posts]) \
    .fillna(0)
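handle_post_comments itself is huge and lives in the gist; a rough sketch of what it likely does, with my own assumptions about the bucket boundaries and normalization details (praw’s replace_more is used to keep only root comments):

def handle_post_comments_sketch(post):   # a sketch, not the original helper
    post.comments.replace_more(limit=0)  # drop "load more comments" stubs
    comments = list(post.comments)       # top-level (root) comments only
    if not comments:
        return pd.DataFrame()

    scores = pd.Series([comment.score for comment in comments])
    norm_scores = (scores - scores.mean()) / ((scores.max() - scores.min()) or 1)
    compounds = pd.Series([sid.polarity_scores(comment.body)['compound']
                           for comment in comments])
    buckets = pd.cut(compounds, bins=[-1, -0.6, -0.2, 0.2, 0.6, 1],
                     labels=['neg_neg', 'neg_neu', 'neu_neu', 'pos_neu', 'pos_pos'],
                     include_lowest=True)

    row = {'root_comments_post_id': post.id}
    for bucket in buckets.cat.categories:
        mask = buckets == bucket
        row[f'root_comments_{bucket}_amount'] = int(mask.sum())
        row[f'root_comments_{bucket}_norm_score'] = norm_scores[mask].mean()
        row[f'root_comments_{bucket}_percent'] = mask.mean()
    return pd.DataFrame([row])

The resulting per-post stats look like this: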

>>> posts_comments_df.head()
      key  root_comments_key  root_comments_neg_neg_amount  root_comments_neg_neg_norm_score  root_comments_neg_neg_percent  root_comments_neg_neu_amount  root_comments_neg_neu_norm_score  root_comments_neg_neu_percent  root_comments_neu_neu_amount  root_comments_neu_neu_norm_score  root_comments_neu_neu_percent  root_comments_pos_neu_amount  root_comments_pos_neu_norm_score  root_comments_pos_neu_percent  root_comments_pos_pos_amount  root_comments_pos_pos_norm_score  root_comments_pos_pos_percent root_comments_post_id
0  9rmz0o                  0                          87.0                         -0.005139                       0.175758                          98.0                          0.019201                       0.197980                         141.0                         -0.007125                       0.284848                          90.0                         -0.010092                       0.181818                            79                          0.006054                       0.159596                9rmz0o
0  9rnu73                  0                          12.0                          0.048172                       0.134831                          15.0                         -0.061331                       0.168539                          35.0                         -0.010538                       0.393258                          13.0                         -0.015762                       0.146067                            14                          0.065402                       0.157303                9rnu73
0  9roen1                  0                           9.0                         -0.094921                       0.450000                           1.0                          0.025714                       0.050000                           5.0                          0.048571                       0.250000                           0.0                          0.000000                       0.000000                             5                          0.117143                       0.250000                9roen1
0  9rq7ov                  0                           1.0                          0.476471                       0.100000                           2.0                         -0.523529                       0.200000                           0.0                          0.000000                       0.000000                           1.0                         -0.229412                       0.100000                             6                          0.133333                       0.600000                9rq7ov
0  9rnd6y                  0                           0.0                          0.000000                       0.000000                           0.0                          0.000000                       0.000000                           0.0                          0.000000                       0.000000                           5.0                         -0.027778                       0.555556                             4                          0.034722                       0.444444                9rnd6y
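The plots below also use posts_with_comments_df, which isn’t defined in the snippets; presumably it’s just posts_df joined with these per-post comment stats, something like:

# An assumption about how posts_with_comments_df is built; the real code is in the gist
posts_with_comments_df = posts_df.merge(
    posts_comments_df, left_on='id', right_on='root_comments_post_id')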

So now we can get the percentage of comments in each sentiment bucket:

percent_columns = ['root_comments_neg_neg_percent',
                   'root_comments_neg_neu_percent', 'root_comments_neu_neu_percent',
                   'root_comments_pos_neu_percent', 'root_comments_pos_pos_percent']

posts_with_comments_df[['subreddit'] + percent_columns] \
    .groupby('subreddit') \
    .mean()[percent_columns] \
    .reset_index() \
    .rename(columns={column: column[13:-7].replace('_', ' ')
                     for column in percent_columns}) \
    .plot(kind='bar', x='subreddit', legend=True,
          title='Percent of comments by sentiments buckets') \
    .yaxis \
    .set_major_formatter(FuncFormatter(lambda y, _: '{:.1f}%'.format(y * 100)))

It’s easy to spot that on less popular subreddits comments are more opinionated:

Comments sentiments

The same can be spotted with mean normalized scores:

norm_score_columns = ['root_comments_neg_neg_norm_score',
                      'root_comments_neg_neu_norm_score',
                      'root_comments_neu_neu_norm_score',
                      'root_comments_pos_neu_norm_score',
                      'root_comments_pos_pos_norm_score']

posts_with_comments_df[['subreddit'] + norm_score_columns] \
    .groupby('subreddit') \
    .mean()[norm_score_columns] \
    .reset_index() \
    .rename(columns={column: column[13:-10].replace('_', ' ')
                     for column in norm_score_columns}) \
    .plot(kind='bar', x='subreddit', legend=True,
          title='Mean normalized score of comments by sentiments buckets')

Comments normalized score

Although those plots are fun even with that link, it’s more fun with something more controversial. I’ve picked one of the recent posts from r/worldnews, and it’s easy to notice that different subreddits present the news in different ways:

Hot title sentiment

And comments are rated differently, some subreddits are more neutral, some definitely not:

Hot title sentiment

Gist with full source code.

Analysing the trip to South America with a bit of image recognition



Back in September, I had a three-week trip to South America. While planning the trip I used a sort of data mining to select the optimal flights, and it worked well. To continue following the data-driven approach (more buzzwords), I’ve decided to analyze the data I collected during the trip.

Unfortunately, as I was traveling without a local SIM card and almost without internet, I couldn’t use Google Location History as in the fun research about the commute. But at least I have tweets and a lot of photos.

At first, I’ve reused old code (more internal linking) and extracted information about flights from tweets.
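The get_tweets helper used below is available in the gist; it could look roughly like this (the tweepy usage, placeholder credentials and handle are my guesses, not the original code):

import tweepy

def get_tweets(screen_name='my_twitter_handle'):  # a sketch; placeholder handle
    auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
    auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')
    api = tweepy.API(auth)
    # Tweets expose .text and .created_at, which are used below
    return tweepy.Cursor(api.user_timeline, screen_name=screen_name).items()

With the tweets in hand, extracting the flights: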

all_tweets = pd.DataFrame(
    [(tweet.text, tweet.created_at) for tweet in get_tweets()],  # get_tweets available in the gist
    columns=['text', 'created_at'])

tweets_in_dates = all_tweets[
    (all_tweets.created_at > datetime(2018, 9, 8)) & (all_tweets.created_at < datetime(2018, 9, 30))]

flights_tweets = tweets_in_dates[tweets_in_dates.text.str.upper() == tweets_in_dates.text]

flights = flights_tweets.assign(start=lambda df: df.text.str.split('✈').str[0],
                                finish=lambda df: df.text.str.split('✈').str[-1]) \
                        .sort_values('created_at')[['start', 'finish', 'created_at']]
>>> flights
   start finish          created_at
19  AMS    LIS 2018-09-08 05:00:32
18  LIS    GIG 2018-09-08 11:34:14
17  SDU    EZE 2018-09-12 23:29:52
16  EZE    SCL 2018-09-16 17:30:01
15  SCL    LIM 2018-09-19 16:54:13
14  LIM    MEX 2018-09-22 20:43:42
13  MEX    CUN 2018-09-25 19:29:04
11  CUN    MAN 2018-09-29 20:16:11

Then I’ve found a JSON dump with airports, made a little hack replacing Ezeiza with Buenos-Aires, and derived cities with lengths of stay from the flights.
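The iata_to_city mapping used below comes from that dump and is defined in the gist; it could look roughly like this (the file name and field names are assumptions):

import json
import re  # used below to strip leftovers of emojis from the IATA codes

# A sketch of building iata_to_city, not the original code
with open('airports.json') as f:
    airports = json.load(f)

iata_to_city = {airport['iata']: airport['city']
                for airport in airports
                if airport.get('iata')}
iata_to_city['EZE'] = 'Buenos-Aires'  # the little hack: Ezeiza is the Buenos Aires airport

And the conversion itself: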

flights = flights.assign(
    start=flights.start.apply(lambda code: iata_to_city[re.sub(r'\W+', '', code)]),  # Removes leftovers of emojis, iata_to_city available in the gist
    finish=flights.finish.apply(lambda code: iata_to_city[re.sub(r'\W+', '', code)]))
cities = flights.assign(
    spent=flights.created_at - flights.created_at.shift(1),
    city=flights.start,
    arrived=flights.created_at.shift(1),
)[["city", "spent", "arrived"]]
cities = cities.assign(left=cities.arrived + cities.spent)[cities.spent.dt.days > 0]
>>> cities
              city           spent             arrived                left
17  Rio De Janeiro 4 days 11:55:38 2018-09-08 11:34:14 2018-09-12 23:29:52
16  Buenos-Aires   3 days 18:00:09 2018-09-12 23:29:52 2018-09-16 17:30:01
15  Santiago       2 days 23:24:12 2018-09-16 17:30:01 2018-09-19 16:54:13
14  Lima           3 days 03:49:29 2018-09-19 16:54:13 2018-09-22 20:43:42
13  Mexico City    2 days 22:45:22 2018-09-22 20:43:42 2018-09-25 19:29:04
11  Cancun         4 days 00:47:07 2018-09-25 19:29:04 2018-09-29 20:16:11

>>> cities.plot(x="city", y="spent", kind="bar",
                legend=False, title='Cities') \
          .yaxis.set_major_formatter(formatter)  # Ugly hack for timedelta formatting, more in the gist

Cities

Now it’s time to work with photos. I’ve downloaded all photos from Google Photos, parsed creation dates from the Exif data, and “joined” them with cities by creation date.
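The read_photos helper is in the gist; it might look roughly like this (the photos directory and the use of Pillow are assumptions):

from pathlib import Path
from datetime import datetime
from PIL import Image

# A sketch of read_photos, not the original code
def read_photos(root='photos'):
    for path in sorted(Path(root).glob('*.jpg')):
        exif = Image.open(path)._getexif() or {}
        taken = exif.get(36867)  # 36867 is the DateTimeOriginal EXIF tag
        if taken:
            yield path.as_posix(), datetime.strptime(taken, '%Y:%m:%d %H:%M:%S')

The “join” itself is a cross join filtered by each city’s arrival/departure window: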

raw_photos = pd.DataFrame(list(read_photos()), columns=['name', 'created_at'])  # read_photos available in the gist

photos_cities = raw_photos.assign(key=0).merge(cities.assign(key=0), how='outer')
photos = photos_cities[
    (photos_cities.created_at >= photos_cities.arrived)
    & (photos_cities.created_at <= photos_cities.left)
]
>>> photos.head()
                          name          created_at  key            city           spent             arrived                left
1   photos/20180913_183207.jpg 2018-09-13 18:32:07  0    Buenos-Aires   3 days 18:00:09 2018-09-12 23:29:52 2018-09-16 17:30:01
6   photos/20180909_141137.jpg 2018-09-09 14:11:36  0    Rio De Janeiro 4 days 11:55:38 2018-09-08 11:34:14 2018-09-12 23:29:52
14  photos/20180917_162240.jpg 2018-09-17 16:22:40  0    Santiago       2 days 23:24:12 2018-09-16 17:30:01 2018-09-19 16:54:13
22  photos/20180923_161707.jpg 2018-09-23 16:17:07  0    Mexico City    2 days 22:45:22 2018-09-22 20:43:42 2018-09-25 19:29:04
26  photos/20180917_111251.jpg 2018-09-17 11:12:51  0    Santiago       2 days 23:24:12 2018-09-16 17:30:01 2018-09-19 16:54:13

After that I’ve got the number of photos per city:

photos_by_city = photos \
    .groupby(by='city') \
    .agg({'name': 'count'}) \
    .rename(columns={'name': 'photos'}) \
    .reset_index()
>>> photos_by_city
             city  photos
0  Buenos-Aires    193
1  Cancun          292
2  Lima            295
3  Mexico City     256
4  Rio De Janeiro  422
5  Santiago        267
>>> photos_by_city.plot(x='city', y='photos', kind="bar",
                        title='Photos by city', legend=False)

Cities

Let’s go a bit deeper and use image recognition. To avoid reinventing the wheel, I’ve used a slightly modified version of the TensorFlow ImageNet tutorial example to find what’s in each photo.
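The modified classify_image isn’t shown here; presumably the change to the tutorial’s classify_image.py makes run_inference_on_image return the top-k predictions instead of printing them, roughly like this (a guess that lives inside that script, so tf, NodeLookup and the graph built by init() come from there):

def run_inference_on_image(image_path, top_k):
    # A sketch of the assumed modification; tensor names are the ones used by the tutorial
    image_data = tf.gfile.FastGFile(image_path, 'rb').read()
    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data}).squeeze()
        node_lookup = NodeLookup()  # id -> human-readable label, defined in the tutorial
        top_ids = predictions.argsort()[-top_k:][::-1]
        return [(node_lookup.id_to_string(node_id), float(predictions[node_id]))
                for node_id in top_ids]

Tagging every photo with its most probable label and score: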

classify_image.init()
tagged_photos = photos.copy()

tags = tagged_photos.name \
    .apply(lambda name: classify_image.run_inference_on_image(name, 1)[0]) \
    .apply(pd.Series)

tagged_photos[['tag', 'score']] = tags
tagged_photos['tag'] = tagged_photos.tag.apply(lambda tag: tag.split(', ')[0])
>>> tagged_photos.head()
                          name          created_at  key            city           spent             arrived                left       tag     score
1   photos/20180913_183207.jpg 2018-09-13 18:32:07  0    Buenos-Aires   3 days 18:00:09 2018-09-12 23:29:52 2018-09-16 17:30:01  cinema    0.164415
6   photos/20180909_141137.jpg 2018-09-09 14:11:36  0    Rio De Janeiro 4 days 11:55:38 2018-09-08 11:34:14 2018-09-12 23:29:52  pedestal  0.667128
14  photos/20180917_162240.jpg 2018-09-17 16:22:40  0    Santiago       2 days 23:24:12 2018-09-16 17:30:01 2018-09-19 16:54:13  cinema    0.225404
22  photos/20180923_161707.jpg 2018-09-23 16:17:07  0    Mexico City    2 days 22:45:22 2018-09-22 20:43:42 2018-09-25 19:29:04  obelisk   0.775244
26  photos/20180917_111251.jpg 2018-09-17 11:12:51  0    Santiago       2 days 23:24:12 2018-09-16 17:30:01 2018-09-19 16:54:13  seashore  0.24720

So now it’s possible to find things that I’ve taken photos of the most:

photos_by_tag = tagged_photos \
    .groupby(by='tag') \
    .agg({'name': 'count'}) \
    .rename(columns={'name': 'photos'}) \
    .reset_index() \
    .sort_values('photos', ascending=False) \
    .head(10)
>>> photos_by_tag
            tag  photos
107  seashore    276   
76   monastery   142   
64   lakeside    116   
86   palace      115   
3    alp         86    
81   obelisk     72    
101  promontory  50    
105  sandbar     49    
17   bell cote   43    
39   cliff       42
>>> photos_by_tag.plot(x='tag', y='photos', kind='bar',
                       legend=False, title='Popular tags')

Popular tags

Then I was able to find what I was taking photos of by city:

popular_tags = photos_by_tag.head(5).tag
popular_tagged = tagged_photos[tagged_photos.tag.isin(popular_tags)]
not_popular_tagged = tagged_photos[~tagged_photos.tag.isin(popular_tags)].assign(
    tag='other')
by_tag_city = popular_tagged \
    .append(not_popular_tagged) \
    .groupby(by=['city', 'tag']) \
    .count()['name'] \
    .unstack(fill_value=0)
>>> by_tag_city
tag             alp  lakeside  monastery  other  palace  seashore
city                                                             
Buenos-Aires    5    1         24         123    30      10      
Cancun          0    19        6          153    4       110     
Lima            0    25        42         136    38      54      
Mexico City     7    9         26         197    5       12      
Rio De Janeiro  73   45        17         212    4       71      
Santiago        1    17        27         169    34      19     
>>> by_tag_city.plot(kind='bar', stacked=True)

Tags by city

Although the most common thing on this plot is “other”, it’s still fun.

Gist with full sources.

Holden Karau, Rachel Warren: High Performance Spark



book cover white Recently I’ve started to use Spark more and more, so I’ve decided to read something about it. High Performance Spark by Holden Karau and Rachel Warren looked like an interesting book, and I already had it from some HIB bundle. The book is quite short, but it covers a lot of topics. It has a lot of techniques for making Spark faster and avoiding common bottlenecks, with explanations and sometimes even going down to Spark internals. Although I’m mostly using PySpark and almost everything in the book is in Scala, it was still useful as the API is mostly the same.

Some parts of High Performance Spark read like config key/param documentation, but most of the book is ok.

Video from subtitles or Bob's Burgers to The Simpsons with TensorFlow



Bob's Burgers to The Simpsons

Back in June I played a bit with subtitles and tried to generate a filmstrip; it wasn’t that successful, but it was fun. So I decided to try to go deeper and generate a video from subtitles. The main idea is to get phrases from a part of some video, get the most similar phrases from another video, and generate something out of them.

As the “enemy” I’ve decided to use a part from Bob’s Burgers Tina-rannosaurus Wrecks episode:

As the source, I’ve decided to use The Simpsons, as they have a lot of episodes and Simpsons Already Did It whatever. I somehow have 671 episodes and managed to get perfectly matching subtitles for 452 of them.

TLDR: It was fun, but the result is meh at best:

Initially, I was planning to use Friends and Seinfeld but the result was even worse.

As the first step I’ve parsed the subtitles (boring, available in the gist) and created a mapping from phrases to “captions” (subtitle parts with timing and additional data), plus a list of phrases from all available subtitles.
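The Caption structure and read_subtitles come from that parsing code; they could look roughly like this (using the srt library is an assumption; timestamps are kept in microseconds to match the output below):

import srt
from collections import defaultdict, namedtuple

Caption = namedtuple('Caption', ['path', 'start', 'length', 'text'])

# A sketch of read_subtitles, not the original parser from the gist
def read_subtitles(path):
    with open(path) as f:
        for subtitle in srt.parse(f.read()):
            yield Caption(path=path,
                          start=int(subtitle.start.total_seconds() * 10 ** 6),
                          length=int((subtitle.end - subtitle.start).total_seconds() * 10 ** 6),
                          text=subtitle.content.replace('\n', ' '))

Building the phrase → captions mapping over all The Simpsons subtitles (root here is the directory with the subtitle files):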

data_text2captions = defaultdict(lambda: [])
for season in root.glob('*'):
    if season.is_dir():
        for subtitles in season.glob('*.srt'):
            for caption in read_subtitles(subtitles.as_posix()):
                data_text2captions[caption.text].append(caption)

data_texts = [*data_text2captions]
>>> data_text2captions["That's just a dog in a spacesuit!"]
[Caption(path='The Simpsons S13E06 She of Little Faith.srt', start=127795000, length=2544000, text="That's just a dog in a spacesuit!")]
>>> data_texts[0]
'Give up, Mr. Simpson! We know you have the Olympic torch!'

After that I’ve found subtitles for the Bob’s Burgers episode, manually selected the captions from the part of the episode that I’ve used as the “enemy”, and processed them in a similar way:

play = [*read_subtitles('Bobs.Burgers.S03E07.HDTV.XviD-AFG.srt')][1:54]
play_text2captions = defaultdict(lambda: [])
for caption in play:
    play_text2captions[caption.text].append(caption)

play_texts = [*play_text2captions]
>>> play_text2captions['Who raised you?']
[Caption(path='Bobs.Burgers.S03E07.HDTV.XviD-AFG.srt', start=118605000, length=1202000, text='Who raised you?')]
>>> play_texts[0]
"Wow, still can't believe this sale."

Then I’ve generated vectors for all phrases with TensorFlow’s Universal Sentence Encoder and used cosine similarity to get the most similar phrases:

module_url = "https://tfhub.dev/google/universal-sentence-encoder/2"
embed = hub.Module(module_url)

vec_a = tf.placeholder(tf.float32, shape=None)
vec_b = tf.placeholder(tf.float32, shape=None)

normalized_a = tf.nn.l2_normalize(vec_a, axis=1)
normalized_b = tf.nn.l2_normalize(vec_b, axis=1)
sim_scores = -tf.acos(tf.reduce_sum(tf.multiply(normalized_a, normalized_b), axis=1))


def get_similarity_score(text_vec_a, text_vec_b):
    emba, embb, scores = session.run(
        [normalized_a, normalized_b, sim_scores],
        feed_dict={
            vec_a: text_vec_a,
            vec_b: text_vec_b
        })
    return scores


def get_most_similar_text(vec_a, data_vectors):
    scores = get_similarity_score([vec_a] * len(data_texts), data_vectors)
    return data_texts[sorted(enumerate(scores), key=lambda score: -score[1])[3][0]]


with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    data_vecs, play_vecs = session.run([embed(data_texts), embed(play_texts)])
    data_vecs = np.array(data_vecs).tolist()
    play_vecs = np.array(play_vecs).tolist()

    similar_texts = {play_text: get_most_similar_text(play_vecs[n], data_vecs)
                     for n, play_text in enumerate(play_texts)}
>>> similar_texts['Is that legal?']
"- [Gasping] - Uh, isn't that illegal?"
>>> similar_texts['(chuckling): Okay, okay.']
'[ Laughing Continues ] All right. Okay.'

Looks kind of relevant, right? Unfortunately only phrase by phrase.

After that, I’ve cut parts of The Simpsons episodes for the matching phrases. This part was a bit complicated: without forced re-encoding (with the same codec) and an explicit framerate (roughly the same framerate as most of the videos), FFmpeg was producing unplayable videos:

def generate_parts():
    for n, caption in enumerate(play):
        similar = similar_texts[caption.text]
        similar_caption = sorted(
            data_text2captions[similar],
            key=lambda maybe_similar: abs(caption.length - maybe_similar.length),
            reverse=True)[0]

        yield Part(
            video=similar_caption.path.replace('.srt', '.mp4'),
            start=str(timedelta(microseconds=similar_caption.start))[:-3],
            end=str(timedelta(microseconds=similar_caption.length))[:-3],
            output=Path(output_dir).joinpath(f'part_{n}.mp4').as_posix())


parts = [*generate_parts()]
for part in parts:
    call(['ffmpeg', '-y', '-i', part.video,
          '-ss', part.start, '-t', part.end,
          '-c:v', 'libx264', '-c:a', 'aac', '-strict', 'experimental',
          '-vf', 'fps=30',
          '-b:a', '128k', part.output])
>>> parts[0]
Part(video='The Simpsons S09E22 Trash of the Titans.mp4', start='0:00:31.531', end='0:00:03.003', output='part_0.mp4')

And at the end I’ve generated a special file for FFmpeg’s concat demuxer and concatenated the generated parts (also with re-encoding):

concat = '\n'.join(f"file '{part.output}'" for part in parts) + '\n'
with open('concat.txt', 'w') as f:
    f.write(concat)
cat concat.txt | head -n 5
file 'parts/part_0.mp4'
file 'parts/part_1.mp4'
file 'parts/part_2.mp4'
file 'parts/part_3.mp4'
file 'parts/part_4.mp4'
call(['ffmpeg', '-y', '-safe', '0', '-f', 'concat', '-i', 'concat.txt',
      '-c:v', 'libx264', '-c:a', 'aac', '-strict', 'experimental',
      '-vf', 'fps=30', 'output.mp4'])

The result is kind of meh, but it was fun, so I’m going to try to do that again with a bigger dataset, even though working with FFmpeg wasn’t fun at all.

Gist with full sources.

Christina Wodtke: Introduction to OKRs



book cover For a better understanding of OKRs, I’ve decided to read Introduction to OKRs by Christina Wodtke. It’s a very short book, but it explains why and how to set OKRs and how to keep track of them. The book isn’t deep or anything, but it contains almost everything I wanted to know about OKRs.

In contrast with some longer books, it’s very nice to read something that isn’t trying to repeat the same content ten times throughout the book.

Camille Fournier: The Manager's Path



book cover As I sometimes need to take on the role of a manager of a small team, I decided to read something about that. So I chose The Manager’s Path by Camille Fournier. The book covers different stages of a manager’s growth, with advice and related stories, and it’s interesting to read; for me it was kind of leisure reading.

Although most parts of the book were way above my level, it was fun to read about the different roles and the expectations of people working in those roles. I’ve probably even understood the difference between a VP and a CTO.

Steven Bird, Ewan Klein, and Edward Loper: Natural Language Processing with Python



book cover white Recently I’ve noticed that my NLP knowledge isn’t that good. So I decided to read Natural Language Processing with Python by Steven Bird, Ewan Klein, and Edward Loper, and it’s a nice book. It explains the core concepts of NLP and how, where and why to use NLTK. Each chapter has lots of exercises at the end; I’ve tried to do most of them and spent a bit too much time on the book.

Some parts of the book are too basic, though: it even has sections about using basic Python data structures.

How I was planning a trip to South America with JavaScript, Python and Google Flights abuse



I had been planning a trip to South America for a while. As I have flexible dates and want to visit a few places, it was very hard to find proper flights. So I decided to try to automate everything.

I’ve already done something similar before with Clojure and Chrome, but it was only for a single flight and doesn’t work anymore.

Parsing flights information

Apparently, there’s no open API for getting information about flights. But as Google Flights can show a calendar with prices for two months of dates, I decided to use it:

Calendar with prices

So I’ve generated every possible combination of interesting destinations in South America plus flights to and from Amsterdam, and simulated user interaction: changing the destination inputs and opening/closing the calendar. At the end, I wrote the results as JSON in a new tab. The whole code isn’t that interesting and is available in the gist. From a high level it looks like:

const getFlightsData = async ([from, to]) => {
  await setDestination(FROM, from);
  await setDestination(TO, to);

  const prices = await getPrices();

  return prices.map(([date, price]) => ({
    date, price, from, to,
  }));
};

const collectData = async () => {
  let result = [];
  for (let flight of getAllPossibleFlights()) {
    const flightsData = await getFlightsData(flight);
    result = result.concat(flightsData);
  }
  return result;
};

const win = window.open('');

collectData().then(
  (data) => win.document.write(JSON.stringify(data)),
  (error) => console.error("Can't get flights", error),
);

In action:

I’ve run it twice to have separate data for flights with and without stops, and just saved the result to JSON files with content like:

[{"date":"2018-07-05","price":476,"from":"Rio de Janeiro","to":"Montevideo"},
{"date":"2018-07-06","price":470,"from":"Rio de Janeiro","to":"Montevideo"},
{"date":"2018-07-07","price":476,"from":"Rio de Janeiro","to":"Montevideo"},
...]

Although it mostly works, in some rare cases it looks like Google Flights has some sort of anti-scraping protection and shows “random” prices.

Selecting the best trips

In the previous part, I parsed 10110 flights with stops and 6422 non-stop flights; it’s impossible to use a brute-force algorithm here (I’ve tried). As reading the data from JSON isn’t interesting, I’ll skip that part.
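The code below assumes flight and trip structures roughly like these (the names are taken from the snippets, the concrete values of the constants are only illustrative, and the real definitions are in the gist):

from typing import NamedTuple, List

class Flight(NamedTuple):
    from_id: int
    to_id: int
    day_number: int  # days since the earliest possible start date
    price: int

class Trip(NamedTuple):
    price: int            # itemgetter(0) below sorts trips by price
    flights: List[Flight]

# Illustrative constants; home_city_id, city_ids, MIN_START and MAX_START
# are defined similarly in the gist
MIN_STAY, MAX_STAY = 2, 5   # days in one city
MIN_VISITED = 3             # minimal amount of cities per trip
MAX_TRIP = 21               # days
MAX_TRIP_PRICE = 1500       # total budget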

At first, I’ve built an index of from destination → day → to destination:

from_id2day_number2to_id2flight = defaultdict(
    lambda: defaultdict(
        lambda: {}))
for flight in flights:
    from_id2day_number2to_id2flight[flight.from_id] \
        [flight.day_number][flight.to_id] = flight

Created a recursive generator that creates all possible trips:

def _generate_trips(can_visit, can_travel, can_spent, current_id,
                    current_day, trip_flights):
    # The last flight is to home city, the end of the trip
    if trip_flights[-1].to_id == home_city_id:
        yield Trip(
            price=sum(flight.price for flight in trip_flights),
            flights=trip_flights)
        return

    # Everything visited or no vacation days left or no money left
    if not can_visit or can_travel < MIN_STAY or can_spent == 0:
        return

    # The minimal amount of cities visited, can start "thinking" about going home
    if len(trip_flights) >= MIN_VISITED and home_city_id not in can_visit:
        can_visit.add(home_city_id)

    for to_id in can_visit:
        can_visit_next = can_visit.difference({to_id})
        for stay in range(MIN_STAY, min(MAX_STAY, can_travel) + 1):
            current_day_next = current_day + stay
            flight_next = from_id2day_number2to_id2flight \
                .get(current_id, {}).get(current_day_next, {}).get(to_id)
            if not flight_next:
                continue

            can_spent_next = can_spent - flight_next.price
            if can_spent_next < 0:
                continue

            yield from _generate_trips(
                can_visit_next, can_travel - stay, can_spent_next, to_id,
                current_day + stay, trip_flights + [flight_next])

As the algorithm is easy to parallelize, I’ve made it possible to run with Pool.imap_unordered, and pre-sorted each chunk for the future merge sort:

def _generator_stage(params):
    return sorted(_generate_trips(*params), key=itemgetter(0))

Then generated initial flights and other trip flights in parallel:

def generate_trips():
    generators_params = [(
        city_ids.difference({start_id, home_city_id}),
        MAX_TRIP,
        MAX_TRIP_PRICE - from_id2day_number2to_id2flight[home_city_id][start_day][start_id].price,
        start_id,
        start_day,
        [from_id2day_number2to_id2flight[home_city_id][start_day][start_id]])
        for start_day in range((MAX_START - MIN_START).days)
        for start_id in from_id2day_number2to_id2flight[home_city_id][start_day].keys()]

    with Pool(cpu_count() * 2) as pool:
        for n, stage_result in enumerate(
                pool.imap_unordered(_generator_stage, generators_params)):
            yield stage_result

And sorted everything with heapq.merge:

trips = [*merge(*generate_trips(), key=itemgetter(0))]

Looks like a solution to a job interview question.

Without optimizations, it was taking more than an hour and consumed almost all of the RAM (apparently typing.NamedTuple isn’t memory efficient with multiprocessing at all), but the current implementation takes 1 minute 22 seconds on my laptop.

As the last step I’ve saved the results as a CSV (the code isn’t interesting and is available in the gist), like:

price,days,cities,start city,start date,end city,end date,details
1373,15,4,La Paz,2018-09-15,Buenos Aires,2018-09-30,Amsterdam -> La Paz 2018-09-15 498 & La Paz -> Santiago 2018-09-18 196 & Santiago -> Montevideo 2018-09-23 99 & Montevideo -> Buenos Aires 2018-09-26 120 & Buenos Aires -> Amsterdam 2018-09-30 460
1373,15,4,La Paz,2018-09-15,Buenos Aires,2018-09-30,Amsterdam -> La Paz 2018-09-15 498 & La Paz -> Santiago 2018-09-18 196 & Santiago -> Montevideo 2018-09-23 99 & Montevideo -> Buenos Aires 2018-09-27 120 & Buenos Aires -> Amsterdam 2018-09-30 460
1373,15,4,La Paz,2018-09-15,Buenos Aires,2018-09-30,Amsterdam -> La Paz 2018-09-15 498 & La Paz -> Santiago 2018-09-20 196 & Santiago -> Montevideo 2018-09-23 99 & Montevideo -> Buenos Aires 2018-09-26 120 & Buenos Aires -> Amsterdam 2018-09-30 460
1373,15,4,La Paz,2018-09-15,Buenos Aires,2018-09-30,Amsterdam -> La Paz 2018-09-15 498 & La Paz -> Santiago 2018-09-20 196 & Santiago -> Montevideo 2018-09-23 99 & Montevideo -> Buenos Aires 2018-09-27 120 & Buenos Aires -> Amsterdam 2018-09-30 460
...

Gist with sources.