This is Part 3 of this series. (At least I didn’t pull a Lucas and start with part 4.)
Again, there may be more criteria, and these are not orthogonal or mutually exclusive.
Now, remember that the context in which I’m comparing these two “methods” is trying to photographically capture the entire world at a human-level POV! So imagine an online experience where you can go to a website (e.g., EveryScape or Google) and walk around just like you were there. Yep. BIG idea.
So, being able to scalably capture, store, distribute, and share the whole world is paramount. If you can’t do this, then game over, man.
Companies like EveryScape, Google, Earthmine, Mapjack, Immersive Media, and a bunch of others found a way to (cost-)effectively drive around cities with car-mounted cameras. EveryScape and Google in particular have done this scalably in multiple cities all around the world, with thousands of miles of coverage. (I’m sure there are others, but I haven’t seen this much of their content published yet.) I think this is proof enough for me to say that panoramic images can scalably cover the world.
Photosynth has not quite done this yet. I’ve seen a pretty extensive number of photographs used to represent a landmark or an area, but I have not yet seen an entire city done this way. There are lots of brilliant minds at work on this, I’m sure, and it does feel feasible. But if content publication is the standard…
Panorama 1, Photosynth 0.
By this, I mean folks online can easily view and experience the content. Again, going to everyscape.com or maps.google.com is proof enough. Using Flash (and Flash did “change the world” in this sense) or Silverlight, users can experience the content, and the backend seems to have been implemented well.
Oh BTW, SeaDragon’s f’in brilliant!
Panorama 2, Photosynth 1.
We live in a dynamic world. Things change all around us. Tomorrow, a Starbucks could turn into a Dunkin Donuts (yes, I’m from the east coast). By maintainability, I mean that these changes in the real world could easily be reflected in the mirror world online.
For any type of change in the real world, we (EveryScape) have a “self-healing” backend, so the only real work is photo acquisition. Assuming all the other car-mounted systems are similar, this is technically solved.
For Photosynth, it seems like a similar approach would work. Although there may be some ownership issues with Photosynth (if crowdsourced), it feels quite reasonable to assume maintainability.
Panorama 3, Photosynth 2.
Panorama 4, Photosynth 3.
Overview of Technical Characteristics
It seems like the main technical difference between panoramas and Photosynth is scalability. One main issue with Photosynth is the image registration / pose estimation problem and how scalable it can be. Basically, for each image added to the synth, features are detected, then corresponded to the rest of the point cloud, and then the relative camera extrinsics are computed. (Apologies for the tech lingo.) I’m not fully convinced that this is the way to go when scaling up to what I want (da world!). Perhaps supplementing the images with GPS and other sensors is a good way to solve this. BUT, if the philosophy for Photosynth is still automation, consumer cameras, and crowdsourcing, I’m not sure I quite believe in scalability (yet).
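To make that scaling concern concrete, here’s a back-of-the-envelope sketch in Python. The function names and constants are mine (purely hypothetical, not anything from Photosynth’s actual pipeline), and real structure-from-motion systems use smarter tricks like spatial indexing and vocabulary trees. But the naive version of “match each new image against the whole point cloud” illustrates why per-image cost grows with collection size, while panorama stitching only ever registers the few shots taken at one spot:

```python
def naive_matching_ops(num_images, features_per_image=1000):
    """Worst-case feature comparisons if every new image is matched
    against every feature already in the point cloud (quadratic overall).
    Numbers are made up for illustration."""
    total = 0
    cloud_size = 0  # features accumulated in the point cloud so far
    for _ in range(num_images):
        total += features_per_image * cloud_size  # match new image vs. cloud
        cloud_size += features_per_image          # then fold its features in
    return total

def panorama_ops(num_panoramas, features_per_image=1000, shots_per_pano=8):
    """Panorama stitching only registers the handful of shots taken at a
    single location against each other, so cost per location is constant
    and total cost is linear in the number of locations."""
    per_pano = (features_per_image ** 2) * shots_per_pano  # rough constant
    return per_pano * num_panoramas

# Naive registration cost blows up roughly quadratically as images pile up,
# while panorama cost grows linearly with locations covered.
for n in (1_000, 10_000, 100_000):
    print(n, naive_matching_ops(n), panorama_ops(n))
```

This is exactly why supplementing images with GPS and other sensor priors is appealing: it prunes the candidate matches down to a local neighborhood instead of the whole world.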
Is the scalability issue surmountable for Photosynth? I think yes. I just need to see it to believe it.