Commoditization of Car-Mounted Immersive Imagery

PixKorea Car

PixKorea Car. They use digital SLRs with fisheye lenses. The mechanical rig can change heights. Cool.

Sorry it’s been a while since my last post.  I’ve been busy with family, work, my dog ate my homework, had to wash my hair…

I was invited to give a talk at a conference in Seoul, Korea called National Spatial Data Infrastructure Expo 2009 (Sep. 9-11).  I spoke about “How to Paint the World,” which stressed the importance of a framework for capturing, processing, storing, and distributing photorealistic, immersive, interactive content of our world for various applications (e.g. local search).  Ok, that sounded fancier than it actually was (or perhaps more boring than it was?).

But that isn’t the gist of this blog.  I wanted to write about how pleasantly surprised I was to see so many street-level, car-mounted camera acquisition systems on the show floor of the expo.  I think I saw at least 5 companies doing this just walking around half the show floor, with various configs and cameras.

Car-mounted ground-level immersive imagery

This car uses Point Grey's Ladybug (red, on top) as well as digital SLRs on the bottom. Not exactly sure why.


This car uses Point Grey's Ladybug and two GPS units to determine orientation. When asked how well that worked, the answer was ambiguous.

In general, I am seeing a bunch of companies being formed around car-mounted systems for street-level panoramic acquisition around the world.  I’m glad to see this, since it feels like another step towards this content type being useful and in demand.  But ultimately, street content will be commoditized (even before it can be monetized — but that’s a whole other topic).

So, what does this mean?  Well, it means the consumers win in the long run.  It also means that the competition will hopefully improve the image quality of this exterior content (really led by Google Street View).  The further differentiation and innovation needed to win in a competitive market will push innovative minds to do a lot more than just display panoramas — enabling mashups and UGC, improving extensibility and maintainability, encoding a whole lot more geo info, getting INTERIORS (a-hem!), etc. will be necessary for survival.  As I said, this should all be good for the consumers, if it pans out this way.  Yay.

It’s still a bit early to tell who will win or lose, and how.  And somewhat surprisingly (to a US-centric person), Google is not winning elsewhere around the world.  Yay.

Panoramas vs. Photosynth (Part 3): Technical Characteristics

Photographically capturing the WORLD!

This is Part 3 of this series.  (At least I didn’t pull a Lucas and start with part 4.)

Let’s compare the technical characteristics/requirements I mentioned in Part 1 (and please read Part 2 as well).

  • scalable
  • distributable
  • maintainable
  • extensible

Again, there may be more and these are not orthogonal or exclusive of each other.


Now, remember that the context in which I’m comparing these two “methods” is trying to photographically capture the entire world at a human-level POV!  So imagine an online experience where you can go to a website (e.g. EveryScape or Google) and be able to walk around just like you were there.  Yep. BIG idea.

Scalable

So, being able to scalably capture, store, distribute, and share the whole world is paramount.  If you can’t do this, then game over, man.

Companies like EveryScape, Google, Earthmine, Mapjack, Immersive Media, and a bunch of others found a way to (cost-)effectively drive around cities with car-mounted cameras.  EveryScape and Google especially have done this scalably in multiple cities all around the world, with thousands of miles of coverage.  (I’m sure there are others, but I haven’t seen this much of their content published yet.)  I think this is proof enough for me to say that panoramic images can scalably cover the world.

Photosynth has not quite done this yet.  I’ve seen a pretty extensive number of photographs used to represent a landmark or an area, but I have not yet seen an entire city done this way.  There are lots of brilliant minds working on this, I’m sure, and it does feel feasible.  But if content publication is the standard…

Panorama 1, Photosynth 0.


Distributable

By this, I mean folks online can easily view and experience the content.  Again, going to everyscape.com or maps.google.com is proof enough.  Using Flash (and Flash did “change the world” in this sense) or Silverlight, users can experience the content, and the backend seems to have been implemented well.

Oh BTW, SeaDragon‘s f’in brilliant!
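
To give a flavor of why SeaDragon-style tiling distributes so well: a huge image is cut into a power-of-two pyramid of small tiles, and the viewer only fetches the tiles covering the current viewport. A back-of-envelope sketch in Python — the 254-pixel tile size is Deep Zoom's usual default, but treat the exact numbers as illustrative assumptions, not a spec:

```python
import math

def pyramid_levels(width, height):
    """A Deep-Zoom-style pyramid halves the image per level down to 1x1,
    so it has ceil(log2(max dimension)) + 1 levels."""
    return math.ceil(math.log2(max(width, height))) + 1

def tiles_at_full_res(width, height, tile=254):
    """Tile count at the finest pyramid level."""
    return math.ceil(width / tile) * math.ceil(height / tile)

# An 8000 x 4000 equirectangular panorama:
levels = pyramid_levels(8000, 4000)
tiles = tiles_at_full_res(8000, 4000)
print(levels, tiles)  # 14 levels; 512 tiles at the finest level
```

The client only ever downloads the handful of tiles in view, never the whole image — that’s the trick.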

Panorama 2, Photosynth 1.


Maintainable

We live in a dynamic world.  Things change all around us.  Tomorrow, a Starbucks could turn into a Dunkin Donuts (yes, I’m from the east coast).  By maintainability, I mean that these changes in the real world can easily be reflected in the mirror world online.

For any type of change in the real world, we (EveryScape) have a “self-healing” backend, so the only real work is photo acquisition.  Assuming all other car-mounted systems are similar, this is technically solved.

For Photosynth, it seems like a similar approach would work.  Although there may be some ownership issues with Photosynth (if crowd-sourced), it feels quite easy to make this assumption of maintainability.

Panorama 3, Photosynth 2.


Extensible

Panorama 4, Photosynth 3.

Overview of Technical Characteristics

It seems like the main technical difference between Panoramas and Photosynth is scalability. One main issue with Photosynth is the image registration / pose estimation problem and how scalable it can be.  Basically, for each image added to the synth, features are detected, then matched against the rest of the point cloud, and then relative camera extrinsics are computed.  (Apologies for the tech lingo.)  I’m not fully convinced that this is the way to go when scaling up to what I want (da world!).  Perhaps supplementing the images with GPS and other sensors is a good way to solve this.  BUT, if the philosophy for Photosynth is still automation, consumer cameras, and crowdsourcing, I’m not sure I quite believe in scalability (yet).
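
To make the scaling concern concrete, here is a minimal sketch (emphatically not Photosynth's actual code) of just the matching step: every feature in a new image gets compared against every descriptor already in the reconstruction, so the per-image cost grows with the size of the point cloud unless you add spatial indexing or extra sensors.

```python
import numpy as np

def match_descriptors(new_desc, cloud_desc, ratio=0.75):
    """Lowe-style ratio-test matching of a new image's feature
    descriptors against the reconstruction's existing descriptors.
    Cost per new image is O(len(new_desc) * len(cloud_desc))."""
    matches = []
    for i, d in enumerate(new_desc):
        dists = np.linalg.norm(cloud_desc - d, axis=1)
        j, k = np.argsort(dists)[:2]          # nearest and second-nearest
        if dists[j] < ratio * dists[k]:       # accept only unambiguous matches
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 128))           # descriptors already in the cloud
new = cloud[:20] + rng.normal(scale=0.01, size=(20, 128))  # re-observed features
m = match_descriptors(new, cloud)
print(len(m))  # prints 20: all re-observed features match their originals
```

With the matches in hand, a real pipeline would feed the 2D-3D correspondences into a PnP solver to get the new camera’s extrinsics; the brute-force loop above is the part that worries me at city scale.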

Can Photosynth overcome the scalability issue?  I think yes.  I just need to see it to believe it.

Panoramic Equipment

I’ve been taking panoramic images for over 10 years, and I’ve been using various gear for taking them — cameras, lenses, rotating heads, tripods, and GPS.  Curious about what I use now?  Here’s my list.


My lens of choice is the Sigma 8mm fisheye.  In general, I prefer a fisheye lens since its field of view is very wide, i.e. you need fewer pictures to cover the full 360 x 180 degrees; i.e. it’s faster.
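
Roughly how much faster? A toy calculation — the 25% stitching overlap and the fields of view below are my own illustrative assumptions, not measured specs:

```python
import math

def shots_for_360(h_fov_deg, overlap=0.25):
    """Minimum shots around the horizon, given each shot's horizontal
    field of view and a fractional overlap reserved for stitching."""
    effective = h_fov_deg * (1.0 - overlap)   # new coverage per shot
    return math.ceil(360.0 / effective)

# A cropped-circle 8mm fisheye covers roughly 180 degrees horizontally;
# a normal lens on a crop body might cover about 25 degrees.
print(shots_for_360(180))  # -> 3
print(shots_for_360(25))   # -> 20
```

Three or four shots per panorama versus twenty is the difference between minutes and seconds per node, which matters a lot when you are shooting thousands of them.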

The optics are quite good, and we’ve had very few of them fail.  There’s some chromatic aberration, but stitching software typically takes care of that.

Sigma 8mm Fisheye Lens


My camera brand of choice is Canon.  We’ve tried Nikons, but they failed a lot more for us under extreme conditions (ask me if you’re curious).  I currently use the Canon T1i, which can record 1080p video. Awesome camera.

Canon T1i

Because the T1i is not a full-frame digital SLR, the circular fisheye image is cropped when used with the Sigma 8mm.  But I actually prefer the crop for better “resolution” of the scene.

Panoramic Tripod Head

My choice of panoramic tripod head is the Nodal Ninja R1.  It’s light, compact, sturdy, and precise. Also, because the mount attaches to the ring-mounted lens (see images below), you don’t have to worry about messing up the focus of the lens — I initially had some trepidation about this, but not anymore.

Nodal Ninja R1 Ring-Mounted Camera

So there.  What do you use to take your panoramas?  Care to share?

A9 and Streetside: Why Did They Fail?

Amazon A9 Maps

Before EveryScape and Google Street View existed (and yes, we were doing this before Google was), there were a couple of attempts at street-level photography by companies you might have heard of: Amazon and Microsoft.

Amazon had their A9 Block View (shown above), and Microsoft had (has?) their Streetside.

My question is: Why did they fail?

In some sense, their intentions were the same as EveryScape’s and Google’s — to enable users to virtually see places, businesses, and points of interest from the comfort of their browser, for various use cases and applications.

One obvious “feature” difference is that they did not use panoramic imagery.  One could argue that panoramic imagery is more immersive and experiential.

Does panoramic imagery make that much of a difference?  Isn’t one of the beauties of the Web that “keep it simple, stupid” wins?

Or are users really looking for richer online experiences?  A better UI/UX (a la Apple and the iPhone)?  Was their approach limiting feature-wise?

More questions than answers, unfortunately.  Facts or biases, your feedback is appreciated.

HDR Part 2: Exposure Fusion

This blog is the second part of the previous blog on high dynamic range imagery.

Exposure Fusion (a.k.a. Enfuse) does not use HDR.  But it is related in the sense that it uses multiple exposures to create a nice “fused” image.  (So technically, “part 2” is a bit misleading.)

Exposure Fusion was introduced in a paper by Mertens, Kautz, and Van Reeth in 2007, and you can learn more about the work here.  This technique basically bypasses HDR creation altogether to create a wonderfully fused image.
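
To give a flavor of the idea, here is a single-scale toy version in Python. The real Mertens et al. method also weights each pixel by contrast and saturation and blends in a multiresolution pyramid to avoid seams, so treat this purely as a sketch of the "well-exposedness" weighting:

```python
import numpy as np

def exposure_fuse(stack, sigma=0.2):
    """Single-scale sketch of Mertens-style exposure fusion:
    weight each pixel by how close it is to mid-gray (0.5),
    normalize the weights across the stack, and blend."""
    stack = np.asarray(stack, dtype=np.float64)   # (N, H, W), values in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # weights sum to 1 per pixel
    return (weights * stack).sum(axis=0)

# Two toy grayscale "exposures": one underexposed, one overexposed.
dark = np.full((4, 4), 0.1)
bright = np.full((4, 4), 0.9)
fused = exposure_fuse([dark, bright])
print(fused[0, 0])  # symmetric weights -> 0.5
```

Notice there is no intermediate HDR file and no tone mapping step — the fused result is already a displayable image, which is exactly the appeal.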

Let’s just briefly discuss some issues with HDR (I will discuss some benefits of HDR in the next blog).  HDR “assembly” takes quite a bit of processing time, and the file sizes bloat up big time — which also means a longer time to load into programs like Photoshop to do anything with it.  From there, you typically end up tone mapping the image anyway.  And don’t get some folks started on the pain-in-the-ass-ness of tone mapping.  Yeah, it generally sucks when you end up doing a lot of them by hand.

Exposure Fusion basically says, “that’s bullsh!t!” There’s no need to convert a bunch of files to something you won’t use, then have to convert again, only to spend the next 2 hours tweaking some parameters you don’t understand that were named by some ivory-tower researchers (sorry guys ;-)). Exposure Fusion just creates a wonderfully “fused” image from your multiple-exposure set, which is the part I really like.

So, getting down to brass tacks: if you have a hard time going from HDR back to LDR using some tone mapping operator that doesn’t understand you, then use Enfuse.  It’s one of the most consistent ways to create an image from multiple exposures.  And it’ll save you time and lots of disk space.

One caveat is that Enfuse is a command line tool.  If you don’t like that, you can find some GUI wrapper programs out there (e.g. Bracketeer).

High Dynamic Range Imagery (part 1) — What the Heck Is It?

So, you’ve heard some folks talk about “high-dynamic range imagery” or HDR, and you think you sort of understand it, or not really?  Well then, I hope to de-mystify it for you in a series of blogs.

In my previous blog, I talked (or more like bitched) about why cameras suck; and one of my reasons was that they lacked sufficient dynamic range in capturing light.

“…, let’s talk about dynamic range.  This is the whole problem of the images above.  We’re stuck with only 0-255 per RGB channel.  This means that we need to describe the brightness of what we see — from dark shadows to sunlight — within the integer range of 0 to 255.  Even RAWs don’t cover it, since the dynamic range needed to describe what we see could be 0-1,000,000.  Yes, cameras suck.  There is high-dynamic-range imagery, and I will talk more about that soon.”

As shown above, there is a series of photos that describes this problem.  In beach shot 1, you see that the exposure was very long, so most pixels are washed out, but you can still see some contrast in the dark parts of the palm trees as well as in the dark shadows on the sand ridges.

As the series of images get darker, e.g. beach shot 2 and beach shot 3, you can see the scenery much better, but the bright area around the sun is still too washed out.  By beach shot 6, we can see some outline of the sun better but everything else is now too dark.

Somewhere in this series of variously-exposed images lies the “right” answer for the composite image — this is the dynamic range problem.  Our eyes can see much better contrast than any of these camera shots (also because we can dynamically adapt better too, but that’s some other blog).  You can imagine manually photoshopping these images to get the solution image you want.  A more automated way is to create a single high-dynamic range image from this series of images, then tone map it.
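
As a toy illustration of that "merge, then tone map" pipeline — this is not how Photoshop or Photomatix actually do it (real tools also recover the camera's response curve, a la Debevec), just a minimal sketch of the two stages:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Toy HDR merge: dividing each LDR pixel by its exposure time
    estimates scene radiance; a 'hat' weight trusts mid-tones most
    and distrusts clipped shadows/highlights."""
    images = np.asarray(images, dtype=np.float64)        # (N, H, W) in [0, 1]
    times = np.asarray(exposure_times, dtype=np.float64).reshape(-1, 1, 1)
    w = 1.0 - np.abs(images - 0.5) * 2.0 + 1e-6          # hat weighting
    return (w * images / times).sum(axis=0) / w.sum(axis=0)

def reinhard_tonemap(radiance):
    """Reinhard's simple global operator: L / (1 + L) squeezes
    [0, inf) radiance into [0, 1) for display."""
    return radiance / (1.0 + radiance)

# Toy scene: the second pixel saturates the long exposure but not the short.
short = np.array([[0.1, 0.5]])
long_ = np.array([[0.4, 1.0]])
hdr = merge_hdr([short, long_], exposure_times=[0.25, 1.0])
ldr = reinhard_tonemap(hdr)
print(hdr, ldr)
```

The saturated pixel gets almost no weight in the long exposure, so its radiance is recovered from the short one — which is precisely how the bright area around the sun can be rescued in the beach series.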

Putting aside the technical lingo bullsh!t, I hope I’ve convinced you that there is a way to combine these images somehow to get the final image you want. (And I won’t bore you with the tech details either — if you must know, let me know pls.)

There are nice software products to do just this: Photoshop, Photomatix, and Enfuse.  There are more, but these are the ones I like.  (If you have your favorite, please comment and share!)

Beach shot "combined solution" using Photoshop

Beach shot "combined solution" using Photomatix

I’m not showing Enfuse just yet since it really isn’t HDR.  But I’m gonna stop right here for now, since this blog is getting too long.  I will talk more about Enfuse and more HDRI-related issues in part 2 of this series.

Why Do Cameras Suck (Compared to Our Eyes)?

Why can’t cameras capture what we see, the way we see it?  In most cases, my digital camera pictures don’t correspond well with what I am actually seeing.  For instance, when I look out the window, I can see the bright outdoors as well as the indoors just fine with my own eyes. BUT when I take a picture, I’m stuck with either a blown-out window or a dark interior.

Why the f#@! is this, you may ask?  More technically in this case, it’s because cameras lack the dynamic range of our eyes.  More universally, it’s because cameras suck compared to human eyes (and we don’t even have the best eyes in the animal kingdom!).

My point is that cameras have a long way to go before we can start capturing images that live up to what we actually see every day.  Isn’t that the point?  I feel like we got stuck with the limitations of technology and seem to have forgotten the whole point of reproducing how and what we see.  Yeah, cameras are getting better all the time, but not fast enough and not close enough yet.

So what are some “parameters” of the camera we can improve?  There’s been a lot of research in optics, vision, perception, etc., but I’m writing a blog and not a research paper (thank goodness it won’t be boring — hopefully).  I’m going to only talk about the following parameters:

  • Resolution
  • Focal length
  • Dynamic range

There’s definitely more, and I found a nice link here with interesting data.

As for resolution, the human eye comes in at roughly 600 megapixels.  Man, cameras are not even close to that!  Sure, you can create a panorama, but I’m talking about doing this in a single shot (or at 30 fps!).

Focal-length-wise, the article puts our eyes at about 16-22 mm.  Basically, our eyes can see a lot all around.  Do a simple test: put your hands out straight, wiggle your fingers, then slowly move your arms out to the sides while looking straight ahead.  I can see to about 180 degrees horizontally.

Finally, let’s talk about dynamic range.  This is the whole problem of the images above.  We’re stuck with only 0-255 per RGB channel.  This means that we need to describe the brightness of what we see — from dark shadows to sunlight — within the integer range of 0 to 255.  Even RAWs don’t cover it, since the dynamic range needed to describe what we see could be 0-1,000,000.  Yes, cameras suck.  There is high-dynamic-range imagery, and I will talk more about that soon.
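
In photographer's terms, dynamic range is counted in stops, where each stop is a doubling of light, so these ratios are easy to compare — with the caveat that equating code values directly to captured contrast is a simplification:

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic 'stops' (doublings of light)."""
    return math.log2(contrast_ratio)

print(round(stops(256), 1))        # 8-bit JPEG: 8.0 stops (at best)
print(round(stops(2 ** 14), 1))    # 14-bit RAW: 14.0 stops (sensor permitting)
print(round(stops(1_000_000), 1))  # a 1,000,000:1 scene: ~19.9 stops
```

So even a generous read of a RAW file leaves you several stops short of a sunlit scene with deep shadows, which is exactly the gap HDR techniques try to close.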

To me, these things basically mean that our typical camera lenses are not wide enough, that we need a lot more resolution, and that we need a lot more dynamic range.

Are there more parameters we can hope for?  Of course!  What do you wish for in the next gen camera?

Arc de Triomphe Photography with My iPhone

One way to get more resolution or field of view is to create a panorama — take more photos and put them together.  My previous two posts have been about this, and I’m following up with a few more examples of the Arc de Triomphe in Paris.

As I’ve mentioned before, I used AutoStitch on my iPhone 3GS.  Many of the panoramas were experiments in adding time and positional elements, which resulted in pretty cool stitched photos.

To get what I call the time element, I stayed in the same place for a few minutes, waiting for dynamic elements of the scene to change — e.g. cars, people, clouds.  By doing this, things that are static remain solid, while things that move take on a ghost-like quality.

To get what I call the positional element, I kept focusing on a single feature as I walked along a path.  In these examples, I focused on the Arc while moving towards it.  This tends to create an impressionist-painting-like effect.

Paris, the Visual City

More Paris

Paris shot from inside a car

More often than not, I find that the typical field of view of a camera is not enough.  Using my iPhone 3GS, I cannot help but create panoramas and mosaics to get a wider field of view or to tell a “story.”

I am visiting Paris on a biz trip, and below are some examples of mosaics I shot using the AutoStitch app on my iPhone.  If you have not been to Paris yet, two words for you:  You must.  It is such a visual city, with an incredible number of beautiful sights all around.

Rue du Nil, Paris.

Ghosts walking on Rue du Nil, Paris.

Preparing for some car-mounted photography in Paris

Dinner in Paris



What About the Inside?

About a year ago, in 2008, I had the privilege to speak in front of the Where 2.0 2008 audience.  The talk was titled “What About the Inside?” (Online video link here.)  I’m bringing this up again since the question is still valid — in some sense much more valid now than before.  What was the context?  It was a question meant not only for the Where 2.0 audience, but also for the GIS, mapping, GPS, 3D/2D imaging, government/civic, and architectural and urban planning communities, and for anyone whose work gathers information about and interacts with the interior places and spaces we live in.

The premise was that we have an unnatural bias towards mapping outside spaces more than inside ones.  Perhaps it has some historical significance, where outside spaces were the final frontier.  Well, that’s no longer true, since much of the earth’s outdoors has been mapped, explored, and marked.  GPS, satellite and aerial imaging, smart software algorithms (e.g. image registration), and Moore’s Law have definitely changed the landscape of how we gather information about the outside.  Now we even have ground-level imagery from Google, EveryScape, etc., and we can zoom around places virtually.

But what about the INSIDE spaces?  I surely don’t want to stop at the entrance of a place I want to visit.  I want to go INSIDE!  Perhaps the right question is: How can I experience the inside of every interior space I’d like to visit?  Yeah, it’s a big question.  And in this context, by “I,” I mean every single person who uses the internet.  There are definitely issues such as privacy, no GPS, no aerial photos, multiple floors/levels, etc. that challenge existing technologies and solutions, since they don’t seem to “map” that well to interiors.

Please check out my Where 2.0 2008 presentation video and I would love to hear your thoughts.  And below is the clip of an EveryScape eye-candy commercial.

