Conceptual sketches for a paper-prototyping environment using a pen-tablet and handwriting

Posted on February 16, 2010


Paper prototyping (from a list apart)

Mind map (from mind map inspiration)

Database diagram (from SVGopen)

Electronic diagram (from Sirbot)

Handwriting recognition

The most natural way to paper prototype is on paper. The directness of eye-hand-pen to conceptual sketch is still much greater than working with a mouse in environments like Visio.

In this paper prototype I aim at an application specifically for use with a pen-tablet (including touch as a possible secondary function).

Conceptual sketches

Below you will find the paper prototypes of an application that lets you create meaningful (semantically tagged) diagrams and structured conceptual texts via a step-by-step translation of hand-drawn diagrams.

As my field is software development, I will sketch the process from simple diagrams to a working application, using hand drawing on a pen-tablet as the main source of input. The possible use cases are not limited to this.

Building an application by drawing? I AM NOT A PROGRAMMER!

Stick with me. When you conceptualize using mind maps or any other form of visualization, you can use the same principles to:

  1. Write a story with multiple chapters
  2. Write a thesis with different topics, which you organize and extend via object models and mind maps
  3. Do paper brainstorms (private and public ones)
  4. Describe and create workflows
  5. Play with physics and objects and build games by drawing shapes which are then assigned to the physics engine

Each of the objects in your sketches can be expanded (opened, zoomed into) into new depths, each representing new details.

Phase 1: defining the basis

1: The basic diagram

This design is sketched using a pen-tablet.

2: The software recognizes the shape as a polygon after it is drawn

Instead of assuming your polygon has a specific shape, the software makes suggestions and asks which shape fits your drawing best.
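The suggestion step could work roughly like this sketch (function and threshold are my assumptions, not part of the original design; real software would use proper stroke recognition):

```python
import math

def suggest_shapes(points):
    """Suggest which clean shape a hand-drawn stroke most resembles.

    points: list of (x, y) tuples sampled along the stroke.
    Returns a list of suggestions, best guess first.
    Crude heuristic: if all points lie at roughly the same distance
    from the centroid, the stroke is probably a circle; otherwise
    we fall back to polygon/rectangle suggestions.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / len(dists)
    spread = max(dists) - min(dists)
    if spread < 0.2 * mean:          # radii nearly equal -> round shape
        return ["circle", "ellipse", "polygon"]
    return ["polygon", "rectangle", "ellipse"]

# Points sampled along a square read as a polygon...
square = [(0, 0), (5, 0), (10, 0), (10, 5), (10, 10),
          (5, 10), (0, 10), (0, 5)]
print(suggest_shapes(square)[0])   # -> polygon

# ...while points sampled on a circle read as a circle.
circle = [(math.cos(a) * 10, math.sin(a) * 10)
          for a in [i * math.pi / 8 for i in range(16)]]
print(suggest_shapes(circle)[0])   # -> circle
```

The point is only that the software ranks candidate shapes instead of deciding for you; the user confirms the best fit.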

3: Example of two shapes

To keep contact with the original drawing, the “cleaned up” objects are transparent and still show the hand-drawn shapes.

4: Telling the software to connect lines to objects

By drawing circles on the lines, overlapping the objects at a specific point, the software is told where to connect the lines.

5: Visual feedback

6: End result


The steps described above can be done DURING or AFTER hand-drawing the diagram, depending on the preferences of the user.

Each object (a line, a circle, a polygon) is stored as a separate object when drawing.
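A minimal sketch of that per-object storage (class and field names are my assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SketchObject:
    """One drawn element, stored separately the moment it is drawn."""
    id: int
    kind: str                      # "line", "circle", "polygon"
    points: list                   # the raw hand-drawn stroke
    semantic_type: Optional[str] = None   # assigned later, e.g. "entity"
    connections: list = field(default_factory=list)  # ids of linked objects

# Two shapes and the line the user drew between them:
entity = SketchObject(1, "polygon", [(0, 0), (10, 0), (10, 5), (0, 5)])
note = SketchObject(2, "circle", [(20, 2)])
link = SketchObject(3, "line", [(10, 2), (20, 2)], connections=[1, 2])

entity.semantic_type = "entity"    # assigning a type afterwards (step 8)
print(link.connections)            # -> [1, 2]
```

Because every stroke is its own object, type assignment and connections can happen in any order, during or after drawing.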

8: Defining the type of object

The object can be of a specific type, assigning meaning and specific rules to it. (For instance: an entity in an Entity Relationship Diagram, or an action in a workflow model.)

9: Editing text

The text written inside the object can be translated to characters in real time or afterwards. Its position (overlapping the object) makes the app assume the text is part of the object.

10: Relationships

The connection / line between two objects can describe a relationship. In the image, the most basic one is shown: parent / child. The parent/child relationship can be inferred automatically from the direction in which the lines are drawn. Relationships are relevant to describe the flow of your diagram.

11: Adding annotations

To describe the meaning of each object, annotations are added.

12: Translating an annotation

There are two ways the annotations can be translated: in real time or afterwards. This sketch shows the translation being done afterwards. The handwritten text is translated via software.

13: The result

Using the same approach as described in step 4, the annotations are connected to the objects.

When using the diagram in the next phase of the design-process, the annotations can be accessed via the objects.

14: Adding more notes

Repeating steps 11 and 12, handwritten notes are added to the diagram and translated to text. The user can use a (virtual) keyboard to type, correct errors in the translation, or change / add text.

15: The result

Each note is translated to text.

16: Connecting the notes

As notes can be inter-related, they can be connected using lines as well. The steps are the same as described for connecting objects and adding annotations.

The layer with the hand drawn diagram is hidden in this example.

17: The clean end result

This clean end result can be shared with other people, embedded in a document and so on. As each object, line and text is defined (representing an object of a specific type, representing a relation of a specific type, belonging to a specific object, connected in a specific way), the semantic description (XML or a relational model in a database) can be used to generate new things. More about that later.
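Such a semantic description might be built like this sketch (element and attribute names are hypothetical, not a fixed schema):

```python
import xml.etree.ElementTree as ET

# A hypothetical semantic description of the clean end result:
# every object, relation and text fragment carries an explicit type,
# so other tools can generate things from it later.
diagram = ET.Element("diagram")
obj = ET.SubElement(diagram, "object", id="1", type="entity")
ET.SubElement(obj, "label").text = "Customer"
ET.SubElement(obj, "annotation").text = "Holds name and address"
ET.SubElement(diagram, "relation", attrib={
    "from": "1", "to": "2", "type": "parent-child"})

xml_text = ET.tostring(diagram, encoding="unicode")
print(xml_text)
```

Note that the hand-drawn strokes themselves need not be in this file; they live on their own layer, while the XML captures only the meaning.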

18: Adding post-its

It is possible I want to add extra notes which will not be in the eventual publication. Each of the “post-its” can be either typed or hand-written.

Phase 2: layering a new diagram on top

In the next example, I layer a new diagram on top of the one described above. It is a more detailed description of each object and – in this case – could be a database table or the description of a class in my application.

19: Drawing the shapes

In this illustration you see shapes which could represent entities for an Entity Relationship model, database tables, or objects I will translate into classes for a computer application.

The lines describe the relationships between specific items in the lists in the objects.

20: The result

Again, using the steps as described in 11 and 12, I connect items – in this case the object-labels in the lists in each box.

21: Cleaning up the lines

In some cases it is preferable to prevent connectors (lines) from running over the boxes. In general, pathfinding routines are used to make the lines run BETWEEN the boxes. (My example here is still “dirty”, but it is the general concept that counts.)

22: Connecting the boxes to the objects in the other drawing

To extend the conceptual models (and since they ARE connected to each other) the user can connect each box to an object on another layer. Possible descriptions and rules connected to the other objects can be inherited by the boxes in this way.

23: The clean end result of the second iteration

24: With inherited annotations

Since we already made and attached annotations to each object in steps 12 and 13, and the boxes are connected to the objects in the first drawing, we can choose to display them here too and add extra references to the objects.


As we defined the boxes and the relationships between items, the next possible step is to use the resulting definition (a semantic description of the diagram in XML or a relational model in a database) as the input for a generator that automatically creates a database with tables.
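A sketch of such a generator (the table and column names are just an example model; a real generator would read them from the semantic XML):

```python
def create_table_sql(table, columns):
    """Turn one box from the diagram into a CREATE TABLE statement.

    columns: list of (name, sql_type) pairs taken from the box's items.
    """
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in columns)
    return f"CREATE TABLE {table} (\n  id INTEGER PRIMARY KEY,\n  {cols}\n);"

# Example model: two boxes from the second-layer diagram.
model = {
    "customer": [("name", "TEXT"), ("address", "TEXT")],
    "customer_order": [("customer_id", "INTEGER"), ("total", "REAL")],
}
for table, columns in model.items():
    print(create_table_sql(table, columns))
```

The relationships drawn between items (step 20) would become foreign keys in the same pass; that is left out here for brevity.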

Phase 3: sketching a user interface

25: Drawing the screen

The sketch made on the pen-tablet shows the most rudimentary setup. In the next steps they are further defined.

26: Defining the objects

The shape is rectangular. The software offers the objects most resembling that shape: an input-box for text, a dropdown or a button. The choice of which object to use can be made either right after drawing the shape, or later, when the object with the hand-drawn shape is clicked on. In this example the latter is the case.

27: Defining the objects #2

In this example, the place-holders for the labels (light grey) are already placed.

The shape we have selected here is almost square. This can suggest the shape is either an image, a check box, a text-area or a radio button.

28: Linking the containers to objects

To understand which object is represented by the form, a connection can be made to the objects drawn in Phase 1. Implicitly, the tables drawn and attached to the object in Phase 2 are (in this example, anyway) connected to the forms and objects as well.

29: Linking objects to items in a table

As the form is connected to an object, and a table to that object, the builder can offer the table the form represents. By drawing lines between the form fields and the database fields, a connection is made between the two.

30: Defining sub-items

As drop-down boxes usually contain data from another table, the fields from this table can be connected to the dropdown box. To limit the scope of this example, the process in which the dropdown itself is defined is not shown. Additional filters to limit the selection within the dropdown box are also left out.

31: Setting labels

For labels on the form, the same principle is used: from a list, connections are drawn to the label place-holders. The list with labels can be multi-lingual. Each language is another “layer” in the “labels” box, addressed when the end-user chooses another language to present the form in.
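The language-layer idea could be as simple as this sketch (label keys and languages are made up for the example):

```python
# Each language is a "layer" in the labels box; the end-user's
# language picks the layer, with a fallback if a label is missing.
LABELS = {
    "en": {"name": "Name", "address": "Address"},
    "nl": {"name": "Naam", "address": "Adres"},
}

def label(key, language, fallback="en"):
    """Fetch the label for the user's language, falling back if missing."""
    return LABELS.get(language, {}).get(key) or LABELS[fallback][key]

print(label("name", "nl"))   # -> Naam
print(label("name", "fr"))   # -> Name (falls back to English)
```

The form itself only references the label place-holder; the layer lookup happens at presentation time.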

32: Adding post-its

To add extra information in the design, for other people, post-its can be added.

Also, additional text can be written on and over the design and connected in logical blocks as shown in steps 14, 15 and 16.

33: The clean result

This image shows the clean result – without the content.

As we designed the backend part in the shape of tables and assuming we are using a framework that allows you to build database driven applications without programming code (The Flash RAD framework is one of those) this is basically a working form in a possible desktop or web-application.

Closing notes

These conceptual sketches showed the step-by-step process of building a database-driven application – using a pen-tablet and software that helps you translate the drawings into semantic objects – in three phases:

  1. Defining the objects and relationships between the objects
  2. Defining the entity relationship model / database model
  3. Defining the front end application

Vectors, text and structure

The created objects are in the end nothing more than vector-shapes, text and structures in which each of these vector-shapes and text-objects are organized and interlinked.

Output: SVG, XML and XHTML

The best output formats for such an application would be XHTML for the text / documents and SVG for the drawings.

  • SVG (Scalable Vector Graphics) is a standard that can be displayed in a web browser, but can also be translated into a vector drawing in Java, Flash and any other programming language. SVG is also a format that can be used as a standard for import into other applications.
  • XHTML + CSS can be rendered in a browser, but can also be used as a standard for conversion into Word documents and PDF format.
  • XML is a standard for data-packaging. It is readable by any environment as it is plain text. The XML will contain the semantic definitions of the diagrams and references between objects in the same layer/diagram and objects in other layers/diagrams.


Not shown in this paper prototype are the different layers used to build / write / sketch the different elements. The basic layers I have in mind are these:

  1. Diagram / drawing layer
  2. Annotation layer
  3. Remarks-layer
  4. Connections layer

Templates and a rules-based engine

As the basic principle for each possible use of an application like this, the software would be rules-based.

Rules can be easily defined in XML, describing the basic workings of each object used in the drawings (text, shapes, lines, connections).

Shapes can be defined in SVG, offering you the freedom to use whatever software to create them and any software to use / present them for drawing.

Templates contain the total of all definitions and describe how they are used and presented in the application.
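A hypothetical rule file and the code that applies it might look like this sketch (element names and suggestion lists are assumptions, echoing the shape suggestions from step 26):

```python
import xml.etree.ElementTree as ET

# Hypothetical rules in XML: which object suggestions the software
# offers for a recognized shape.
RULES_XML = """
<rules>
  <rule shape="rectangle">
    <suggest>input-box</suggest>
    <suggest>dropdown</suggest>
    <suggest>button</suggest>
  </rule>
  <rule shape="square">
    <suggest>image</suggest>
    <suggest>checkbox</suggest>
  </rule>
</rules>
"""

def suggestions_for(shape):
    """Look up the suggestion list for a recognized shape."""
    root = ET.fromstring(RULES_XML)
    for rule in root.findall("rule"):
        if rule.get("shape") == shape:
            return [s.text for s in rule.findall("suggest")]
    return []

print(suggestions_for("rectangle"))  # -> ['input-box', 'dropdown', 'button']
```

Because the rules live in XML rather than in code, a template can swap them out per use case (ER diagrams, workflows, UI sketches) without touching the application.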

Possible uses and extensions

As mentioned at the top of this post, the paper-prototyping software can be used to make software applications, electrical diagrams, define workflows and write entire documents, using hand-written content as primary input. Taking it many steps further, it could be used for prototyping desktop-publishing documents, and for word processing combined with manual note-taking.

Using the keyboard and mouse

The keyboard and mouse have very specific use cases, and within these use cases they beat the pen-tablet. For instance, handwriting recognition may work now, but is still far from free of mistakes in guessing what you wrote. Writing long documents also puts more strain on the muscles of your hand than typing does. Typing can be faster than handwriting as well, if you are trained.

The mouse is simpler for input than the tablet in some cases. Dragging and moving objects or the nodes of an object might be easier to do with a mouse than with a pen. (I cannot judge yet which is easier, as I am at this moment (February 2010) a newbie with the tablet.)