Modeling without blueprints, from pictures

Modeling an object is the art of recreating it in a digital 3D space. No matter how accurate you want your model to be, no matter how fast you work, it always comes down to those famous three dimensions.

Some objects are simple, so simple that you can model them from scratch just by studying them and taking measurements. Take a look around, you’ll see plenty of objects of this kind. But there are also many difficult objects, with elaborate shapes. Put your ruler back into your schoolbag, it won’t be of use this time. The best-known solution for modeling a difficult object is blueprints: they aren’t blue anymore like architects’ drawings 50 years ago; we’re talking about orthogonal views of the object: front, rear, side, and top views, usually. Using them, you can locate any specific point of the object in our 3D space.


But for some objects, there is just no blueprint. What do we have left? Pictures. It’s easy to get pictures of the object you want to model. You can take them yourself with your digital camera, or you can grab tons of them on the Web. Some people can use them to model by eye alone, but it requires serious attention because of perspective. Perspective makes closer things look bigger, so the proportions can’t be trusted without correcting for it.

Here comes the approach I want to introduce: using dedicated software to get rid of perspective.

How does it work?

Consider our 3D space, where the object stands, and a set of cameras around it. Each camera took a picture. Put each picture in front of its camera, on a transparent sheet. Looking from the location of the camera, the picture will perfectly match the object behind it (if you leave aside the optical distortions induced by camera defects).

Now consider a specific point on your object, e.g. a corner of the windshield, or the center of a rim. You can see this point from some of your cameras, so it shows up on some of your pictures. You can draw a straight line going from a camera to this point: it will pass through the related picture (standing in front of the camera on its transparent sheet, remember?) precisely at the location where this point appears on the picture. This is the notion of projection of the real world onto the picture.
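The projection can be sketched in a few lines of Python. This is a simplified pinhole camera (the camera sits at the origin, looking down the Z axis; the `project` name and the focal length are mine, purely for illustration, not anything insight3d exposes):

```python
# Project a 3D point onto a camera's image plane (simplified pinhole model).
# The camera is at the origin, looking down +Z; 'focal' is the distance from
# the camera to the transparent sheet holding the picture.

def project(point3d, focal=1.0):
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    # The line from the camera (origin) to the point crosses the image
    # plane z = focal at these 2D coordinates:
    return (focal * x / z, focal * y / z)

# A point twice as far away projects half as big: that's perspective.
near = project((1.0, 1.0, 2.0))   # (0.5, 0.5)
far = project((1.0, 1.0, 4.0))    # (0.25, 0.25)
```

This is exactly why raw pictures lie about proportions: the divide-by-depth shrinks distant parts of the object.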

insight3d works by reverse-computing this projection. Give it the pictures and a set of specific points on your model, and it will be able to compute the locations of the different cameras that shot those pictures.

Once the cameras are located, it can compute the 3D location of the specific points, by throwing lines from each camera location through the location of each point on each picture. For a given specific point, all the lines will converge at the real place of this reference point on the model.
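The line-throwing step amounts to a small least-squares problem: the reconstructed point is the one closest to all the rays at once. Here is an illustrative pure-Python sketch (the `triangulate` and `solve3` helpers are mine, not insight3d code), using two rays that both pass exactly through the point (1, 2, 3):

```python
# Find the 3D point closest (least squares) to a set of camera rays.
# Each ray is (origin, direction): the camera location and the direction
# through the reference point's mark on that camera's picture.

def triangulate(rays):
    # Accumulate A = sum(I - d d^T) and b = sum((I - d d^T) o) over all rays;
    # the best point p solves A p = b.
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for origin, direction in rays:
        n = sum(c * c for c in direction) ** 0.5
        d = [c / n for c in direction]            # normalized direction
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * origin[j]
    return solve3(A, b)

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting on a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Two cameras looking at the point (1, 2, 3) from different places:
rays = [((0, 0, 0), (1, 2, 3)),      # ray from the first camera
        ((10, 0, 0), (-9, 2, 3))]    # ray from the second camera
print(triangulate(rays))             # ~ [1.0, 2.0, 3.0]
```

With noisy, real-world rays the lines no longer meet exactly, and the same least-squares machinery returns the point that splits the difference.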

So you’re able to compute a 3D model from a set of pictures. Of course it’s a simplified 3D model: I only used 171 specific points, for example. But it will serve as a basis to draw a better spline cage.

Using insight3d

insight3d is Free Software, available at no cost for both Windows and Linux. Once installed and launched, it greets you with its unusual interface:


We’ll start straight away. I assume you have already gathered the reference pictures of your object. Mine is a car, a concept car only shown at a couple of motor shows, for which no information and no blueprint is available. So I’ll add my first picture, using the menu File > Add Image. Tip: you can zoom the image with the scroll wheel and pan it with the middle button.


Use the “Points Creator” button on the left to activate the tool that lets you place the reference points on our first picture. They’re represented by little crosses:


You can see in the picture a little popup showing a zoomed-in portion of the image under the mouse cursor, to help you place the points more precisely. It conveniently auto-disappears if you zoom in enough. A nice touch from the developer.

Now add another picture; it will appear, but the reference points don’t show up. You have to place them. But insight3d must be able to match them with the ones you placed on the first picture. To do so, use the PageUp and PageDown keys to go through the list of existing points. They will appear to the left of the mouse cursor, conveniently letting you see all the occurrences of a given point in all the pictures you have already added. The picture below shows two things in the popup: the zoom and the sample of the previous picture showing this point.


When you move the mouse over a cross, the popup shows up again to display the same point in all the previous pictures. For example, the previous picture shows two thumbnails in the popup: the location in the previous picture of the reference point we’re about to place in this picture, and the zoomed portion of this picture under the mouse cursor. Once you have placed this point, the popup window auto-jumps to the next reference point. A nice touch from the developer, again.

It’s now time to place more and more points, to gather enough data for insight3d to compute the camera locations and triangulate the points. You can try that whenever you want, by using the menu item “Calibration” > “Automatic calibration”, and then “Modelling” > “Triangulate user vertices”. Once these two steps are performed, insight3d should display green dots near the crosses, to indicate where it computed the positions of our reference points from the reconstructed model.


You may have noticed that the reference point on the right (near the popup) has no green dot. That means the current amount of data (number of pictures and number of points) wasn’t enough for insight3d to compute the position of this reference point in 3D space. It’s easy to fix: simply add more points, on more pictures. As an example, my final set uses 171 points on 15 pictures for the real project about this concept car.


Also, the green dots may be slightly off the crosses. This means insight3d detected an inconsistent location of the reference point on this picture: the green dot shows the place where the reference point should be, according to insight3d. It may be right or wrong, because the computation it performs depends on the precision of your work (that is, how precisely you located the points on all the pictures). Don’t hesitate to zoom in and place points really carefully; it will help insight3d reconstruct a more accurate model. Another good habit is to use many pictures. Theoretically, 2 pictures are enough to perfectly locate a point in 3D space, because there will be only one intersection of the 2 lines. But since placing a point on a picture perfectly is not possible, many pictures give you a beam of lines from which an average position is computed, much more reliable because the errors are minimized.

Exporting the work to Blender

First, export the model from insight3d. I used the VRML format, via the menu item “File” > “Export VRML”. Then, in Blender, use the menu “File” > “Import” > “X3D & VRML 97” (the last entry in Blender 2.49b). Do not use “File” > “Import” > “VRML 1.0”, it won’t work. When done, a cloud of points (vertices) shows up in Blender:
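If you’re curious what Blender actually imports, the vertex cloud is just a list of coordinates in the VRML file. Here is a rough sketch of pulling it out with a regular expression; it assumes the simple `Coordinate { point [ ... ] }` layout, and insight3d’s real output may well differ:

```python
# Minimal sketch of extracting the vertex cloud from a VRML 97 file,
# in case you want to inspect it outside Blender. Assumes the simple
# "Coordinate { point [ ... ] }" layout; real exports may differ.
import re

def read_vrml_points(text):
    match = re.search(r"point\s*\[(.*?)\]", text, re.DOTALL)
    if not match:
        return []
    tokens = re.split(r"[\s,]+", match.group(1).strip())
    numbers = [float(tok) for tok in tokens if tok]
    # Group the flat number list into (x, y, z) triples.
    return [tuple(numbers[i:i + 3]) for i in range(0, len(numbers), 3)]

sample = """
Shape {
  geometry PointSet {
    coord Coordinate {
      point [ 0.0 0.1 0.2, 1.0 1.1 1.2 ]
    }
  }
}
"""
print(read_vrml_points(sample))  # [(0.0, 0.1, 0.2), (1.0, 1.1, 1.2)]
```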


Unfortunately, insight3d didn’t export the two-point polygons, so you’ll have to recreate the edges. Enter Edit mode (Tab key) and create edges by selecting two vertices and pressing the F key. When done, you’ll have the following (this picture shows a work-in-progress version of my project, so there are fewer than 171 points):


Because insight3d doesn’t know about the ground or the natural orientation of our model, it comes in randomly oriented. So we have to align it properly. For that purpose, I use the yellow edge in the following picture:


This edge goes from the center of one wheel to the center of the other, so it defines both the horizontal and the front-to-rear orientation. I use this property to align the model. Also, pay attention to the middle vertices (along the white line below): they should be aligned as much as possible.
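I do the alignment itself by eye in Blender, but the idea can be sketched numerically: rotate the whole point cloud about the vertical axis until the wheel-center edge is parallel to one axis. The `align_to_y` helper below is mine, purely illustrative:

```python
# Sketch of the alignment idea: rotate the point cloud about the Z axis
# so the wheel-center edge ends up parallel to the Y axis.
import math

def align_to_y(points, a, b):
    # a, b: indices of the two wheel-center vertices
    ax, ay, _ = points[a]
    bx, by, _ = points[b]
    angle = math.atan2(bx - ax, by - ay)      # rotation needed around Z
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c, z) for x, y, z in points]

# Hypothetical wheel centers plus one extra vertex:
wheels = [(0.0, 0.0, 0.0), (0.3, 2.5, 0.0), (1.0, 1.0, 0.5)]
aligned = align_to_y(wheels, 0, 1)
# aligned[1] now sits on the Y axis (x ~ 0): the car faces straight ahead.
```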


Here’s the full model (171 points), properly oriented and untouched from insight3d except for a mirror modifier: you can clearly see the proportions look right, and the shapes are accurate. It’s up to you to use more and more points, to extract more and more details from the reference pictures and bring them into your 3D model. For example, here’s the current look of my project (as of writing): I just used some more points, and added details in Blender:


Some extra features within insight3d

Automatic reference points: insight3d can discover reference points automatically, based on picture pattern recognition. This works best with pictures taken close together, with few changes from one picture to the next. The tutorial available on the insight3d website demonstrates this feature on buildings, and there it works well. For my example, with varied points of view and several different backgrounds behind the model (different motor shows), it didn’t work. So I had to create all my reference points by hand. Make sure to take a look at the tutorial on the insight3d website if you want to try it.

Polygons: insight3d allows you to create polygons, to be exported to Blender. I didn’t use the feature much, except to create lines (two-vertex polygons) following the body lines of the car I was working on. The key tool for this is the “Polygon creator” button on the left side. When activated, you can select a number of reference points, and a polygon is shown using the points you selected. You can confirm this polygon and move on to creating the next one by pressing the Enter key. At any time, you can cycle backwards through all the created polygons using the Backspace key, and use the menu item “Edit” > “Erase current polygon” to delete the polygon shown in pink.

Cameras: insight3d also supports camera export, but unfortunately I didn’t achieve a useful result. The exported camera data (location, rotation) doesn’t match the imported model. insight3d does support exporting to other file formats, for example the .rzi files used by ImageModeler, a similar program by Autodesk. Opening the .rzi file exported by insight3d in ImageModeler didn’t work either. I guess the different exports available from insight3d use different coordinate systems, but I didn’t try to read the source code to understand the relationships between them.

Save often: there are some minor bugs in the current version of insight3d, 0.3.2. Despite the small version number, it’s already quite stable. Still, don’t forget to save your project often.

One final thought

One criticism of insight3d, regarding feedback and support.

During my project, I encountered several bugs. An annoying one: the project file doesn’t support spaces in folder names (at least in the Windows version I used). I tried to reach the developer about that, but got no reply. He doesn’t seem to answer incoming emails about his software, and a comment on the BlenderNation announcement of insight3d confirmed my fear. Let’s hope this article can cause him to change his mind!

Picture credits

The first picture of the Lamborghini Sesto Elemento (shown twice in this article) was found on Wikipedia; its author is Alainrx8. The two other pictures of the Lamborghini Sesto Elemento were shot by Thomas Durand, known in the modeling scene as AMV12. Congratulations to both of them for these nice pictures.

6 thoughts on “Modeling without blueprints, from pictures”

  1. Hi Tom,

    I really really need a piece of information from you regarding your Ariel Atom project. I have been trying to build the car using your Blender art as the source since 2009. I finally learned how to manipulate Blender so that I can get the lengths and angles needed from the chassis, but I have one serious show stopper: I can’t tell what 1 Blender measuring unit represents in real life according to your settings. PLEASE help me out on this one so I can proceed with this project.

    Thanks a million,

  2. @maadiah
    I’m afraid I can’t; I abandoned Linux as my primary desktop long ago (mainly because of games, shame on me) and just use it for servers now. insight3d clearly looks unmaintained. I heard once of a fork called insight3Dng (next gen), but its project page looks pretty low on activity too.

    @Erick Jimenez

    Scaling between Blender and the real world is just a matter of choice. There’s no predefined scale. I always use 1 blender unit = 1 meter.


    You can send me a message at THOMAS DOT BARON AT GMAIL DOT COM if you’re still interested.

  3. I’ve been looking for something like this for months. If it turns out to be what I hope, this is going to solve so many problems, like dealing with old and/or sparsely covered image sets. The next best (and expensive) thing I could find was iWitness, for forensic scene measurements.

    Thank you!
