As I think about the progression of things towards “free”, I keep running into instances where “if only they added another” becomes ever more likely/practical.
I was trying to take a good photo of my baby girl as she sat in my lap today. I was using my iPhone, and everyone knows how hard it can be to get a good shot that way. An earlier piece pointed out that eventually, with the ever-cheaper price of displays, phones would have displays on both sides; in this instance, that would have let me see what I had in the viewfinder. But I couldn’t.
The photo didn’t come out all that great, but as I emailed it to my wife back at home, I thought for a moment that not only did the photo show little of my daughter and me, it also showed little of what was around us. So my wife had no way of knowing where we were.
So I almost took a second picture of what we had been looking toward at the time, which happened to be Treasure Island here on San Francisco Bay. But then I stopped myself and realized that, with the decreasing price of camera lenses, the phone could just do that by itself.
The new iPhones have cameras on both sides, so that should be trivial, right? Just a software thing. But what about other cameras? SLRs? It doesn’t have to be the same high-end lens or photo quality, but why not add a bunch of smartphone cameras to all sides of the camera body? That way you’d get context for your photos. The camera could store them in a way that software on my computer could connect the two. Just as GPS coordinates are embedded in photos, why not photos of what was on either side of me, or behind me?
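The "software on my computer could connect the two" part wouldn't need much. A minimal sketch of the idea, with made-up filenames and timestamps (a real tool would read the capture times from each photo's EXIF metadata, the same place GPS coordinates live): pair each main photo with any context shots taken within a few seconds of it.

```python
from datetime import datetime, timedelta

def pair_context_shots(main_photos, context_photos, window_seconds=5):
    """Return {main_filename: [context filenames taken within the window]}.

    Both inputs are lists of (filename, capture_time) tuples. In practice
    the capture times would come from EXIF data; here they are hardcoded
    for illustration.
    """
    window = timedelta(seconds=window_seconds)
    pairs = {}
    for main_name, main_time in main_photos:
        pairs[main_name] = [
            ctx_name
            for ctx_name, ctx_time in context_photos
            if abs(ctx_time - main_time) <= window
        ]
    return pairs

# Hypothetical example: one main shot plus shots from extra side/rear cameras.
main = [("IMG_0001.jpg", datetime(2010, 5, 1, 14, 30, 0))]
context = [
    ("CTX_rear_0001.jpg", datetime(2010, 5, 1, 14, 30, 1)),   # the view behind me
    ("CTX_left_0001.jpg", datetime(2010, 5, 1, 14, 29, 58)),  # off to one side
    ("CTX_old.jpg",       datetime(2010, 5, 1, 13, 0, 0)),    # too old to match
]

print(pair_context_shots(main, context))
# → {'IMG_0001.jpg': ['CTX_rear_0001.jpg', 'CTX_left_0001.jpg']}
```

Matching on timestamps is the crudest possible join; the camera could just as easily write the association into the files directly.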
Never know what you’ll see as things move closer to free.