ISSN 2011-0146

Generative ebook covers

published in general, programming, typography, visualization by mga on October 10, 2014

This post was originally published on the NYPL blogs. A Spanish version is coming soon.


Finding better covers for public domain ebooks

Here at NYPL Labs we’re working on an ebook-borrowing and reading app. On the technical side, Leonard Richardson is doing all the back end magic, consolidating multiple data sources for each book into a single concise format: title, author, book cover and description. John Nowak is writing the code of the app itself (that you will be able to download to your phone). I am doing the design (and writing blog posts). Many of the ebooks we will be offering come from public domain sites such as Project Gutenberg. If you spend a few minutes browsing that site you will notice that many of its ebooks either have a really crappy cover image or none at all:

[Three Project Gutenberg cover images]

Book covers weren’t a big deal until the 20th century, but now they’re how people first encounter a book, so not having one puts a book at a real disadvantage. Covers are problematic beyond ebooks, too: it’s difficult to find high-quality, reusable covers for out-of-print or public domain books. Some projects, such as Recovering the Classics, approach this problem in interesting ways. However, we at NYPL are still left with very limited (and expensive) solutions to this problem.

Given that the app’s visual quality is highly dependent on ebook cover quality (a wall of bad book covers makes the whole app look bad), we needed a solution for displaying ebooks with no cover or a bad cover. The easy answer is to do what retail websites do for products with no associated image: display a generic image.

[Generic “no cover” images from iTunes, S&S, and Abrams]

This is not a very elegant solution. When dealing with books, it seems lazy to have a “nothing to see here” image. We will have at least a title and an author to work with. The next obvious choice is to make a generic cover that incorporates the book’s title and author. This is also a common choice in software such as iBooks:

[iBooks generic cover]

Skeuomorphism aside, it is a decent book cover. However, it feels a bit cheesy, and I wanted something more in line with the rest of the app’s design (a design I am leaving for a future post). We needed a design that could display very long titles (up to 80 characters) but would also look good with short ones (two or three characters); it should allow for one credited author, multiple authors, or none at all. I decided on a plainer, more generic cover image:

[NYPL generic cover, first version]

Needless to say, this didn’t impress anyone, which is OK because the point was not to impress. We needed a cover that displayed author and title information and was legible to most people, and this checked every box… but… at the same time… wouldn’t it be cool if…


While discussing options for a better generative cover, I remembered 10 PRINT, a generative-art project and book led by Casey Reas that explores one line of Commodore 64 (C64) code:

10 PRINT CHR$(205.5+RND(1)); : GOTO 10

This code draws one of two possible characters (diagonal up or diagonal down) on the screen at random, over and over again. The C64 screen can show up to 40 characters per row. The end result is a maze-like graphic like the one seen in this video:
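For readers without a C64 handy, the loop can be approximated in a few lines of Python. This is a sketch, not the original: Unicode box-drawing diagonals stand in for PETSCII characters 205 and 206, and the row count is arbitrary.

```python
import random

# Python sketch of the C64 one-liner 10 PRINT CHR$(205.5+RND(1)); : GOTO 10
# CHR$(205.5 + RND(1)) truncates to 205 or 206: a coin toss per cell.
WIDTH = 40  # a C64 text row is 40 characters wide

def ten_print(rows=10, width=WIDTH):
    lines = []
    for _ in range(rows):
        # each cell randomly picks one of the two diagonal characters
        lines.append("".join(random.choice("╱╲") for _ in range(width)))
    return "\n".join(lines)

print(ten_print())
```

Run it a few times: every output is a different maze, but the texture is always the same.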

At the 2012 Eyeo festival, Casey Reas talked about this project, which involves nine other authors, all collected in this book. I highly recommend watching Reas’s presentation (the link jumps to 30:11, where 10 PRINT is mentioned). The two characters, diagonal up and diagonal down, come from the C64 PETSCII character list, which is laid out here on the Commodore keyboard:


Each key on the PETSCII keyboard has a geometric shape associated with it. These shapes can be used to generate primitive graphics in the C64 operating system. For example, here is a rounded rectangle (I added some space to make it easier to see each character):

5 3 3 3 9
2 0 0 0 2
2 0 0 0 2
a 3 3 3 b

In terms of the letters on the same keyboard, that rectangle looks like this:

B   B
B   B

10 PRINT was the starting point for my next ebook cover generator. In 10 PRINT, a non-alphanumeric character is chosen by a random “coin toss” and displayed as a graphic. In my cover generator, a book’s title is transformed into a graphic: each letter A-Z and digit 0-9 is replaced with its PETSCII graphic equivalent (e.g. W is replaced with an open circle). I used Processing to quickly create sketches that allowed some parameter control, such as line thickness and grid size. For characters not on the PETSCII “keyboard” (such as accented Latin letters or Chinese characters), I chose a replacement graphic based on the output of passing the character to Processing’s int() function.
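That substitution rule can be sketched in Python. The shape names and the modulo fallback here are illustrative only; the actual Processing sketch maps each character to a drawing routine, and `ord()` plays the role of Processing’s int() applied to a char.

```python
# Illustrative character-to-shape table; the real generator covers all
# of A-Z and 0-9 with PETSCII-style graphics.
SHAPES = {"Q": "filled_circle", "W": "open_circle",
          "M": "diagonal_down", "A": "triangle"}
SHAPE_POOL = sorted(set(SHAPES.values()))

def shape_for(ch):
    ch = ch.upper()
    if ch in SHAPES:
        return SHAPES[ch]
    # Characters off the PETSCII "keyboard" (accented letters, Chinese
    # characters) fall back on the code point, mirroring what
    # Processing's int() returns for a char.
    return SHAPE_POOL[ord(ch) % len(SHAPE_POOL)]

print([shape_for(c) for c in "Ma"])
```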

Colors and fonts

In order to have a variety of colors across the books, I decided to use the combined length of the book title and the author’s name as a seed number, and use that seed to generate a color. This color and its complementary are used for drawing the shapes. Processing has a few functions that let you easily create colors. I used the HSL color space, which makes generating complementary colors straightforward (each color, or hue in HSL parlance, sits at a point on a circle; its complementary is the diametrically opposite point). The gist code:
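The embedded gist isn’t reproduced here, but the same idea might look like this in Python. The seed-to-hue mapping is a guess; the original Processing sketch (using colorMode(HSB)) differs in detail.

```python
import colorsys

def cover_colors(title, author):
    # Combined length of title and author acts as the seed.
    seed = len(title) + len(author)
    hue = (seed % 360) / 360.0        # hypothetical seed-to-hue mapping
    # The complementary hue sits diametrically opposite on the circle.
    comp_hue = (hue + 0.5) % 1.0
    def to_rgb255(h):
        # colorsys uses HLS ordering: hue, lightness, saturation
        return tuple(round(v * 255) for v in colorsys.hls_to_rgb(h, 0.5, 0.9))
    return to_rgb255(hue), to_rgb255(comp_hue)

base, comp = cover_colors("Moby Dick", "Herman Melville")
print(base, comp)
```

Because the seed is just a length, the mapping is deterministic: the same title and author always yield the same pair of colors.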

This results in something like:


To ensure legibility and avoid clashes with the generated colors, I always use black on white for the text. I chose Avenir Next as the font: the app uses it throughout its interface, it comes preinstalled with the OS, and it contains glyphs for multiple languages.

There are more (and better) ways to create colors using code. I didn’t really go down the rabbit hole here but if you feel so inclined, take a look at Herman Tulleken’s work with procedural color palettes, Rob Simmon’s extensive work on color, or this cool post on emulating iTunes 11’s album cover color extractor.


I created a function that draws graphic alternate characters for the letters A-Z and the digits 0-9. I decided to simplify a few graphics to more basic shapes: the PETSCII club (X) became three dots, and the spade (A) became a triangle.

I wrote a function that draws a shape given a character k, a position x,y and a size s. Here you can see the code for drawing the graphics for the letter Q (a filled circle) and the letter W (an open circle).

My cover generator calls drawShape repeatedly for each character in a book’s title. The size of the shape is controlled by the length of the title: the longer the title, the smaller the shape.
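A minimal Python sketch of that dispatch and sizing logic follows. The constants and the sizing formula are assumptions, and the returned tuples stand in for the ellipse() calls the Processing version actually makes; only Q and W are shown, as above.

```python
COVER_PX = 400  # assumed cover width in pixels

def shape_size(title, cover_px=COVER_PX):
    # Longer title -> more cells per row -> smaller shapes.
    cells = min(10, max(2, len(title) // 3 + 2))  # hypothetical rule
    return cover_px // cells

def draw_shape(k, x, y, s):
    """Return a drawing command tuple instead of drawing directly.
    Q is a filled circle, W an open (stroked) circle."""
    if k == "Q":
        return ("ellipse", x + s / 2, y + s / 2, s, s, "fill")
    if k == "W":
        return ("ellipse", x + s / 2, y + s / 2, s, s, "stroke")
    return None

print(shape_size("Macbeth"), draw_shape("Q", 0, 0, 10))
```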

Each letter in the title is replaced by a graphic and repeated as many times as it fits in the space allotted. The resulting grid is a sort of visualization of the title, an alternate alphabet. In the example below, the M in “Macbeth” is replaced by a diagonal downward stroke (the same character used to great effect in 10 PRINT). The A is replaced by a triangle (rather than the club found on the PETSCII keyboard). The C becomes a horizontal line offset from the top, the B a vertical line offset from the left, and so on. Since the title is short, the grid is large and the full title is not visible, but you get the idea:


There is a Git repository for this cover generator you can play with.

Some more examples (notice how “Moby Dick”, nine characters including the space, does fit in the 3×3 grid below and how the M in “Max” is repeated):





And so on:

[Example covers: Douglass, Aesop]

The original design featured the cover on a white (or very light) background. This proved problematic, as the text could be dissociated from the artwork, so we went for a more “enclosed” version (I especially like how the Ruzhen Li cover turned out!):

[Example covers: Doctorow, Li, Justice]

We initially thought about generating all these images and putting them on a server along with the ebooks themselves, but 1) that is an inefficient use of network resources, since we needed several different sizes and resolutions, and 2) when converted to PNG the covers lose a lot of their quality. I ended up producing an Objective-C version of this code (Git repo) that runs on the device and generates a cover on the fly when no cover is available. The Obj-C version subclasses UIView and can be used as a fancy-ish “no cover found” replacement.

Cover, illustrated

Of course, these covers do not reflect the content of the book: you can’t get an idea of what the book is about by looking at the cover. However, Leonard pointed out that many Project Gutenberg books, such as this one, include illustrations embedded as JPG or PNG files. We decided to use those images, when available, as a starting point for a generated cover. The idea is to generate one cover for each illustration in a book and let people decide which cover is best using a simple web interface.

I tried a very basic first pass using Python (which I later abandoned for Processing):


This lacks personality and becomes problematic as titles get longer. I then ran into Chris Marker and Jason Simon’s work, and was inspired:

Marker & Simon

I liked the desaturated color and the emphasis on faces. Faces can be automatically detected in images using computer-vision algorithms, some of which are included in OpenCV, an open-source library that can be used from Processing. Here’s my first attempt in the style of Marker and Simon, with and without face detection:

[Without and with face detection]

I also tried variations on the design, adding or removing elements, and inverting the colors:

[Variations: line without face detection, line with face detection, inverted without face detection]

Since Leonard and I couldn’t agree on which variation was best, we decided to create a survey and let people decide (I am not a fan of this approach, which can easily turn into a 41-shades-of-blue situation, but I also didn’t have a compelling case for either version). The clear winner was, to my surprise, inverted colors with no face detection:

[Winning style applied to Flatland, Ten Girls, and Procopius]

The final Processing sketch (Git repo) has many more parameters than the 10 PRINT generator:

[Screenshot of the image-cover Processing sketch and its parameters]


As with many subjects, you can go really deep down the rabbit hole when it comes to creating the perfect automated book cover. What if we detect illustrations vs. photographs and produce a different style for each? What about detecting where the main image is so we can crop it better? What if we do some OCR on the images to automatically exclude text-heavy images which will probably not work as covers?

This can become a never-ending project and we have an app to ship. This is good enough for now. Of course, you are welcome to play with and improve on it:

BodyType v0.1

published in art, design, interaction, kinect, programming, typography by mga on March 4, 2012

Create fonts by waving in thin air!

I have finally found some time to build a semi-standalone binary (Mac OS X 10.6 or later) of BodyType, my Kinect-based font-creation software. This version supports only UPPERCASE A-Z and 0-9. If you want to create more glyphs, let me know. The download includes a README.txt file that, as its name indicates, you must read. BodyType v0.1 depends on other libraries and programs in order to create your fonts properly, and that file explains how to install them; they should all be installed with MacPorts.

The code is also available for download on Google Code.


This project was done as part of the requirements to complete the Spring 2011 Interactive Art and Computational Design course with Professor Golan Levin at Carnegie Mellon University.

BodyType was built with openFrameworks and makes use of ImageMagick, FontForge and Potrace.

Introducing the Stereogranimator

published in art, design, history, interaction, programming, web by mga on January 26, 2012

GIF made with the NYPL Labs Stereogranimator - view more at

I’m posting this photo, made with the Stereogranimator, as a teaser for a post I will publish later about my work at the New York Public Library.

Charming details

published in design, interaction, web by mga on January 22, 2012

While reading a post about Twitter’s acquisition of Summify, I ended up on Summify’s website:


Note the video peeking in at the top. It’s not a mistake. That detail prompts you to “look for” the video, either by scrolling vertically (which doesn’t work, though it would be elegant if it did) or by clicking play. Clicking reveals the video:

It struck me as a curious way of “showing the video without showing it”, taking advantage of our tendency to complete shapes.

Body Type

published in art, design, interaction, kinect, programming, typography by mga on June 29, 2011

Update 2: Download a semi-standalone binary (Mac OS X 10.6 or later) of the project. This version supports only UPPERCASE A-Z and 0-9. I’m working on a Latin-1-friendly version.

Update: I have released most of the code for this project so that you can take a look. Send any improvements! ;)

A typeface is an environment for someone’s expression: using typefaces we write documents, create posters and subtitle movies. We follow the type creator’s restrictions and design decisions (kerning, spacing, proportion) when using her fonts. Good typefaces are created by highly skilled people and can take several years to complete. Body Type aims to allow anyone to create a highly expressive and personal typeface using only their body and hand gestures.

Body Type is an exercise in freestyle computing and was built using openFrameworks, the Microsoft Kinect sensor, OpenNI, FontForge, Potrace, ImageMagick, PHP, TouchOSC and the Apple iPhone.


This project builds on a previous generative-art exercise that used the Kinect OpenNI skeleton system to create letterforms. I decided to take the idea further and create a self-contained application that would allow anyone to create a TrueType font using their body and gestures. The software should allow people to create type with their bodies in a way evocative of light-painting photography:

love more.

Technical information

Body Type was created using the Microsoft Kinect sensor, controlled by a C++ application built with openFrameworks and its OpenNI extension. This allowed the 3D input received from the sensor to be converted into variable-width outlines and skeletons. The next challenge was creating a TrueType font (.TTF file) from these silhouettes. The font-generation process goes through several stages:

  1. openFrameworks generates a black-and-white bitmap representation of each letter.
  2. ImageMagick converts the image to a format compatible with FontForge.
  3. FontForge uses Potrace to vectorize the bitmap, then generates the final letterform and font.
  4. Since Body Type was displayed in the STUDIO for Creative Inquiry as part of an exhibition, web functionality was added so that font creators could send themselves the resulting files: PHP creates a compressed ZIP of all image and TTF files and sends it to the specified email address.
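The hand-off between those tools could be scripted along these lines. This is a sketch only: the actual project drives them from C++ and PHP, and the paths, flags, and FontForge script shown here are illustrative.

```python
import subprocess

def bitmap_to_bmp(png_path, bmp_path):
    # Step 2: ImageMagick's `convert` turns the openFrameworks PNG
    # into a bitmap that Potrace can trace.
    subprocess.run(["convert", png_path, bmp_path], check=True)

def build_ttf(fontforge_script):
    # Step 3: FontForge (which calls Potrace internally) vectorizes
    # each glyph bitmap and generates the final .ttf.
    subprocess.run(["fontforge", "-script", fontforge_script], check=True)
```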

The created font has six parameters that determine a letterform’s visual attributes: skeleton lines, skeleton dots, line width, upper body, lower body and hand trails. Since controlling these parameters with gestures or on-screen menus would be quite cumbersome, a TouchOSC iOS overlay was created to allow remote control via Open Sound Control:

Body Type iPhone console


During the STUDIO exhibition dozens of fonts were created. Below are samples of some of them.

meta 2
By Terry Irwin

shawn sims outline
By Shawn Sims

heather knight flower alphabet
By Heather Knight (dingbats)

chinese whispers
By Cong Ma (Chinese characters)

faste bold
By Haakon Faste

Creating the font is just part of the process; a font is made to be used. This project acknowledges the limitations of the fonts created, both technical (they lack proper letter spacing) and alphabetical (they contain only representations for the letters A to Z and the numbers 0 to 9). However, these fonts allow for “freestyle” graphic experimentation:

By Paulo Pinto

By Juan Carlos Cammaert

Body Type
By Mauricio Giraldo Arteaga

Further work could explore creating complete character sets.

This project was done as part of the requirements to complete the Spring 2011 Interactive Art and Computational Design course with Professor Golan Levin at Carnegie Mellon University.
