Packing Better Montages than ImageMagick with Python Rect Packer

ImageMagick has a built-in montage tool. It's good enough for casual montaging, but it's definitely suboptimal for packing images of varying sizes.

All photos from: https://unsplash.com/collections/1199299/fun-with-fall-(thanksgiving%2C-autumn)

Simply using ImageMagick's montage, the result looks something like the following. First, the script that I run:

#!/bin/bash
# resize copies of all input images to fit within 480x480, then montage them
TEMP_DIRECTORY=$(mktemp -d /tmp/montageXXXXXX)
/usr/local/bin/mogrify -path "${TEMP_DIRECTORY}/" -geometry 480x480\> "$@"
/usr/local/bin/montage "${TEMP_DIRECTORY}"/* -geometry +2+2 "$( dirname "$1" )"/montage.jpg

First I rescale all the images to fit within 480x480 (keeping the aspect ratio, shrinking only), and then run montage with 2 pixels of spacing around each tile.

Original images (just scaled down)

This looks pretty bad, mostly because montage will not pack the rectangles any more densely than a uniform grid of cells.

We could first resize all the images so that their height is e.g. 480px:

for f in "$@"
do
	/usr/local/bin/convert "$f" -geometry x480 "${f%.*}_480h.jpg"
done

And then run montage again to get this:

Images resized to height=480px

Already looking much better, but we have little control over the resulting size of the montage; ImageMagick just does its best at packing everything. With similar heights it's an easy job, but we can still see a lot of annoying whitespace on the right. What if there were a better way to pack the images?

Enter, rectpack: https://github.com/secnot/rectpack

This is a Python package implementing several algorithms for rectangle packing, a concrete spatial variant of the classic knapsack problem (NP-complete!) from computer science: https://en.wikipedia.org/wiki/Knapsack_problem
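
Before the full montage script, here is the minimal usage pattern of rectpack, just to show the shape of the API (the rectangle sizes below are made-up toy values):

from rectpack import newPacker

packer = newPacker(rotation=False)      # keep images upright, no 90-degree rotation
for rid, (w, h) in enumerate([(480, 320), (300, 480), (480, 480)]):
    packer.add_rect(w, h, rid=rid)      # rid lets us map packed rects back to inputs
packer.add_bin(1000, 1000)              # one target canvas to pack into
packer.pack()

for b, x, y, w, h, rid in packer.rect_list():
    print('rect %d -> bin %d at (%d, %d), size %dx%d' % (rid, b, x, y, w, h))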

Here's my script:

import argparse
import glob
import os

import cv2
import numpy as np
from rectpack import newPacker

parser = argparse.ArgumentParser(description='Montage creator with rectpack')
parser.add_argument('--width', help='Output image width', default=5200, type=int)
parser.add_argument('--aspect', help='Output image aspect ratio, \
    e.g. height = <width> * <aspect>', default=1.0, type=float)
parser.add_argument('--output', help='Output image name', default='output.png')
parser.add_argument('--input_dir', help='Input directory with images', default='./')
# NOTE: argparse's type=bool treats any non-empty string as True, so pass --debug True to enable it
parser.add_argument('--debug', help='Draw "debug" info', default=False, type=bool)
parser.add_argument('--border', help='Border around images in px', default=2, type=int)
args = parser.parse_args()

files = sum([glob.glob(os.path.join(args.input_dir, '*.' + e)) for e in ['jpg', 'jpeg', 'png']], [])
print('found %d files in %s' % (len(files), args.input_dir))

print('getting images sizes...')
# cv2.imread returns arrays of shape (height, width, channels)
sizes = [(im_file, cv2.imread(im_file).shape) for im_file in files]

# NOTE: you could pick a different packing algo by setting pack_algo=..., e.g. pack_algo=rectpack.SkylineBlWm
packer = newPacker(rotation=False)
for i, r in enumerate(sizes):
    # pad each rectangle by the border on all sides; r[1] is (height, width, channels)
    # and rectpack wants (width, height); rid maps the rect back to the file list
    packer.add_rect(r[1][1] + args.border * 2, r[1][0] + args.border * 2, rid=i)

out_w = args.width
aspect_ratio_wh = args.aspect
out_h = int(out_w * aspect_ratio_wh)

packer.add_bin(out_w, out_h)

print('packing...')
packer.pack()

output_im = np.full((out_h, out_w, 3), 255, np.uint8)

used = []

for rect in packer.rect_list():
    # rect_list() gives (bin id, x, y, width, height, rect id)
    b, x, y, w, h, rid = rect

    used += [rid]

    orig_file_name = sizes[rid][0]
    im = cv2.imread(orig_file_name, cv2.IMREAD_COLOR)
    # y is measured from the bottom of the canvas here, so flip it into row indices,
    # and strip off the border padding that was added before packing
    output_im[out_h - y - h + args.border : out_h - y - args.border, x + args.border : x + w - args.border] = im
    if args.debug:
        cv2.rectangle(output_im, (x, out_h - y - h), (x + w, out_h - y), (255, 0, 0), 3)
        cv2.putText(output_im, "%d" % rid, (x, out_h - y), cv2.FONT_HERSHEY_PLAIN, 3.0, (0, 0, 255), 2)

print('used %d of %d images' % (len(used), len(files)))

print('writing image output %s:...' % args.output)
cv2.imwrite(args.output, output_im)

print('done.')

Running it like so:

$ python3 pack.py --input_dir ~/Downloads/montage/resize480/ --width 2200 --border 10 --debug True

Resulted in this:

Montage with rectpack

That doesn't look the best, but it's definitely nice that it tries to tile things together.

There are some options to consider:

$ python3 pack.py --help
usage: pack.py [-h] [--width WIDTH] [--aspect ASPECT] [--output OUTPUT]
               [--input_dir INPUT_DIR] [--debug DEBUG] [--border BORDER]

Montage creator with rectpack

optional arguments:
  -h, --help            show this help message and exit
  --width WIDTH         Output image width
  --aspect ASPECT       Output image aspect ratio, e.g. height = <width> *
                        <aspect>
  --output OUTPUT       Output image name
  --input_dir INPUT_DIR
                        Input directory with images
  --debug DEBUG         Draw "debug" info
  --border BORDER       Border around images in px

Running over the fixed-height images:

$ python3 pack.py --input_dir ~/Downloads/montage/h480/ --width 4800 --aspect 0.5 --border 5 --debug True

Or:

$ python3 pack.py --input_dir ~/Downloads/montage/h480/ --width 2500 --aspect 1.2 --border 5

This gives us much more control over the montage.
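
One more knob worth playing with is the packing algorithm itself, as the comment in the script hints. A hedged one-liner, assuming your rectpack version exports the SkylineBlWm algorithm and the SORT_AREA sort order:

from rectpack import newPacker, SkylineBlWm, SORT_AREA

# skyline bottom-left with waste map, rectangles sorted by area, no rotation
packer = newPacker(pack_algo=SkylineBlWm, sort_algo=SORT_AREA, rotation=False)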

Enjoy!
Roy.

Cylindrical image warping for panorama stitching

Hey-o
Just sharing a code snippet to warp images to cylindrical coordinates, in case you're stitching panoramas with Python and OpenCV...

This is an improved version of what I had in class some time ago: http://hi.cs.stonybrook.edu/cse-527
It runs VERY fast: no loops involved, all matrix operations. In C++ this code would look gnarly. Thanks, Numpy!
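
In case you want the gist of it right away, here is a minimal sketch of the idea (the snippet itself may differ in details): assume a simple pinhole camera with an intrinsics matrix K, lift each output pixel onto a unit cylinder, project it back through K, and let cv2.remap do the sampling.

import cv2
import numpy as np

def cylindrical_warp(img, K):
    # Warp img to cylindrical coordinates, fully vectorized (no Python loops)
    h, w = img.shape[:2]
    y_i, x_i = np.indices((h, w))                                   # pixel grid of the output image
    X = np.stack([x_i, y_i, np.ones_like(x_i)], axis=-1).reshape(h * w, 3)
    X = X @ np.linalg.inv(K).T                                      # normalized camera coordinates
    # lift to 3D points on the unit cylinder, then project back through K
    A = np.stack([np.sin(X[:, 0]), X[:, 1], np.cos(X[:, 0])], axis=-1)
    B = A @ K.T
    B = B[:, :-1] / B[:, [-1]]
    # invalidate points that fall outside the source image
    B[(B[:, 0] < 0) | (B[:, 0] >= w) | (B[:, 1] < 0) | (B[:, 1] >= h)] = -1
    B = B.reshape(h, w, 2).astype(np.float32)
    return cv2.remap(img, B[..., 0], B[..., 1], cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)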

Enjoy!
Roy

Take a SWIG out of the Gesture Recognition Toolkit (GRT)

Reporting on a project I worked on for the last few weeks - porting the excellent Gesture Recognition Toolkit (GRT) to Python.
Right now it's still a pull request: https://github.com/nickgillian/grt/pull/151.

Not exactly porting; rather, I've simply added Python bindings to GRT that allow you to access the GRT C++ APIs from Python.
Did it using the wonderful SWIG project. Such a wondrous tool, SWIG is. Magical.

Here are the deets
Continue reading "Take a SWIG out of the Gesture Recognition Toolkit (GRT)"

Aligning faces with py opencv-dlib combo

Face alignment with Dlib and OpenCV

This is my first trial at using Jupyter notebook to write a post, hope it makes sense.

I've recently taught a class on generative models: http://hi.cs.stonybrook.edu/teaching/cdt450

In class we've manipulated face images with neural networks.

One important thing I found that helped is to align the images so the facial features overlap.
It helps the nets learn the variance in faces better, rather than waste their "representation power" on the shift between faces.

The following is some code to align face images using the excellent Dlib (Python bindings), http://dlib.net. First I'm just using a standard face detector, and then using the facial features extractor to get the information needed for a complete alignment of the face.
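
If you just want the gist of the detect-then-align flow, here's a minimal sketch: it uses dlib's standard 68-point landmark model, and the target eye/nose positions are arbitrary choices of mine, not the post's exact values.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def align_face(img, out_size=256):
    faces = detector(img, 1)
    if len(faces) == 0:
        return None
    landmarks = predictor(img, faces[0])
    pts = np.array([(p.x, p.y) for p in landmarks.parts()], dtype=np.float32)
    # anchor on the eye centers and nose tip, and map them to fixed output positions
    left_eye, right_eye, nose = pts[36:42].mean(axis=0), pts[42:48].mean(axis=0), pts[33]
    src = np.float32([left_eye, right_eye, nose])
    dst = np.float32([[0.35, 0.40], [0.65, 0.40], [0.50, 0.62]]) * out_size
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(img, M, (out_size, out_size))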

After the alignment - I'm just having fun with the aligned dataset 🙂
Continue reading "Aligning faces with py opencv-dlib combo"

Build your AWS Lambda Machine Learning Function with Docker

I've recently made a tutorial on using Docker for machine learning purposes, and I thought also to publish it in here: http://hi.cs.stonybrook.edu/teaching/docker4ml

It includes videos, slides and code, with hands-on demonstrations in class.

A GitHub repo holds the code: https://github.com/royshil/Docker4MLTutorial

I made several scripts to make it easy to upload python code that performs an ML inference ("prediction") operation on AWS Lambda.
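
For flavor, the core of such a function is just a handler that loads a model once and predicts per request. A minimal sketch (the model file name and request format here are assumptions, not the tutorial's actual code):

import json
import pickle

# loaded once per Lambda container and reused across invocations
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

def handler(event, context):
    # expects an API-Gateway-style event whose body is a JSON string with a 'features' list
    features = json.loads(event['body'])['features']
    prediction = model.predict([features])[0]
    return {'statusCode': 200, 'body': json.dumps({'prediction': float(prediction)})}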

Enjoy!
Roy.

An automatic Tensorflow-CUDA-Docker-Jupyter machine on Google Cloud Platform


For a class I'm teaching (on deep learning and art) I had to create a machine that auto-starts a Jupyter notebook with TensorFlow and GPU support. Just create an instance and presto: a Jupyter notebook with TF and GPU!
How awesome is that?

Well... building it wasn't that simple.
So for your enjoyment - here's my recipe:
Continue reading "An automatic Tensorflow-CUDA-Docker-Jupyter machine on Google Cloud Platform"

Projector-Camera Calibration - the "easy" way

First let me open by saying that projector-camera calibration is NOT EASY. But it's not technically complicated, either.

It is, however, an amalgamation of optimizations that accumulate error with each step, so that the end product is not far from a random guess.
So the 3D reconstructions I was able to get from my calibrated pro-cam were just a distorted mess of points.

Nevertheless, here come the deets.
Continue reading "Projector-Camera Calibration - the "easy" way"

Revisiting graph-cut segmentation with SLIC and color histograms [w/Python]

As part of the computer vision class I'm teaching at SBU I asked students to implement a segmentation method based on SLIC superpixels. Here is my boilerplate implementation.

This follows the work I've done a very long time ago (2010) on the same subject.

For graph-cut I've used PyMaxflow: https://github.com/pmneila/PyMaxflow, which is very easily installed by just pip install PyMaxflow

The method is simple (a rough code sketch follows the list):

  • Calculate SLIC superpixels (the SKImage implementation)
  • Use the markings to build the foreground and background color histograms (from the superpixels under the markings)
  • Set up a graph with a straightforward energy model: the smoothness term is the K-L divergence between a superpixel's histogram and its neighbor's, and the match term is infinity if the superpixel is marked as BG or FG, or else the K-L divergence between the superpixel's histogram and the FG and BG histograms
  • To find neighbors I've used Delaunay tessellation (from scipy.spatial) for simplicity, but full neighbor finding could be implemented by looking at all the superpixels touching a superpixel's boundary
  • Color histograms are 2D over H-S (from the HSV image)
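
A condensed sketch of that pipeline (not the full boilerplate): it assumes img is a BGR image, fg_mask / bg_mask are boolean arrays of the user's markings, and a large constant stands in for infinity.

import cv2
import maxflow
import numpy as np
from scipy.spatial import Delaunay
from scipy.stats import entropy          # entropy(p, q) is the K-L divergence
from skimage.segmentation import slic

def hs_hist(hsv, mask, bins=8):
    # 2D Hue-Saturation histogram, normalized and smoothed
    h = cv2.calcHist([hsv], [0, 1], mask.astype(np.uint8), [bins, bins], [0, 180, 0, 256])
    return (h / (h.sum() + 1e-8)).ravel() + 1e-8

def segment(img, fg_mask, bg_mask, n_segments=300):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    labels = slic(img[:, :, ::-1], n_segments=n_segments)          # SLIC expects RGB
    ids = np.unique(labels)
    hists = [hs_hist(hsv, labels == i) for i in ids]
    centers = np.array([np.argwhere(labels == i).mean(axis=0) for i in ids])
    fg_hist, bg_hist = hs_hist(hsv, fg_mask), hs_hist(hsv, bg_mask)

    g = maxflow.Graph[float]()
    nodes = g.add_nodes(len(ids))
    INF = 1e9                                                       # stands in for infinity
    for i, sp in enumerate(ids):
        # match term: hard constraint if marked, otherwise K-L divergence to the FG/BG models
        if fg_mask[labels == sp].any():
            g.add_tedge(nodes[i], INF, 0)
        elif bg_mask[labels == sp].any():
            g.add_tedge(nodes[i], 0, INF)
        else:
            g.add_tedge(nodes[i], entropy(hists[i], bg_hist), entropy(hists[i], fg_hist))
    # smoothness term between Delaunay-neighboring superpixels
    tri = Delaunay(centers)
    edges = {tuple(sorted(e)) for s in tri.simplices for e in [(s[0], s[1]), (s[1], s[2]), (s[0], s[2])]}
    for a, b in edges:
        w = np.exp(-entropy(hists[a], hists[b]))
        g.add_edge(nodes[a], nodes[b], w, w)
    g.maxflow()
    fg_ids = [ids[i] for i in range(len(ids)) if g.get_segment(nodes[i]) == 0]
    return np.isin(labels, fg_ids)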

Result

[Python] OpenCV capturing from a v4l2 device

I tried to set the capture format on a webcam from OpenCV's cv2.VideoCapture and ran into a problem: it's using the wrong IOCTL command.
So I used python-v4l2capture to get images from the device, which allows more control.
Here is the gist:
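
In case the gist doesn't render, this is roughly the capture pattern from python-v4l2capture's own examples (the device path, resolution and buffer count are assumptions; decoding the raw bytes into an OpenCV image depends on the negotiated pixel format):

import select
import v4l2capture

video = v4l2capture.Video_device('/dev/video0')
size_x, size_y = video.set_format(1280, 720)   # the driver may adjust the requested size
video.create_buffers(30)
video.queue_all_buffers()
video.start()

select.select((video,), (), ())                # block until a frame is ready
image_data = video.read_and_queue()            # raw bytes in the negotiated pixel format
video.close()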

Enjoy!
Roy
