Photogrammetry using Google Street View Imagery

Discussion in 'Photogrammetry, LiDAR & UAS' started by Kent McMillan, Jan 8, 2017.

  1. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    Yes, it does, but since the uncertainties in the directions are being bootstrapped from the adjustment, the standard errors don't rest on particularly good a priori values.
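    For context, "bootstrapping" the uncertainties here means letting the adjustment rescale the assumed standard errors from its own residuals, via the a posteriori variance of unit weight. A minimal Python sketch, with made-up residuals, weights, and redundancy:

        import numpy as np

        # Hypothetical residuals (arc seconds) and a priori weights from a
        # direction adjustment; the redundancy figure is also made up.
        v = np.array([1.2, -0.8, 0.5, -1.5, 0.9])   # residuals
        p = np.ones_like(v)                          # a priori weights
        dof = 2                                      # redundancy, n - u

        # A posteriori variance of unit weight: sigma0^2 = v'Pv / (n - u)
        sigma0 = np.sqrt((p * v**2).sum() / dof)
        print(f"a posteriori sigma0 = {sigma0:.2f} arc seconds")
        # Multiplying the a priori standard errors by sigma0 rescales them
        # to values consistent with the residuals actually obtained.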
     
  2. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    As it turns out, the Google Street View imagery is complicated by the fact that it is a composite of about eight wide-angle images that have been stitched together. I assume that the Google camera is oriented so that four of the lenses are pointing either parallel with the direction of travel of the camera car or at right angles to it and the other four fill in the quadrants between them.

    The test project that I described in my earliest posts was lucky in that the image used was just a part of the image from the forward-looking lens and didn't overlap onto the adjacent images used in the composite. So that was probably why it worked like a single-frame image. In the case of the image of the building on West 6th Street in Austin that I posted above, that image is most likely a composite stitched together from two different frames and can't be treated as a single-frame image.
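    If that assumption about the lens layout is right, the component camera covering a given panorama azimuth is simple modular arithmetic. A sketch, with the 8-camera, 45° spacing taken as the assumption stated above:

        def camera_index(azimuth, travel_heading, n_cameras=8):
            # Which of the n stitched cameras covers this azimuth, assuming
            # camera 0 points along the direction of travel and the rest are
            # spaced evenly around the horizon (45 degrees apart for n = 8).
            step = 360.0 / n_cameras
            return round(((azimuth - travel_heading) % 360.0) / step) % n_cameras

        print(camera_index(300.0, 288.18))   # -> 0, i.e. the forward-looking camera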
     
  3. paden cash

    paden cash 7-Year Member

    Joined:
    Jul 1, 2010
    Messages:
    7,709
    Likes Received:
    5,642
    Location:
    The Great State of Okie Homie
    Licensed in:
    OK
    When I was with the highway department we were split into two divisions, ground and aerial. My office just happened to be the last one in the "ground" side and most of my prairie-dog-cubicle neighbors were with "aerial". I picked up a lot of info from contact 'osmosis'.

    Photo analysis is close to witchcraft. At the time we were just becoming able to "rubber band" the digital images to "fit" the visible control. Everybody in aerial was convinced it was the neatest thing since sliced bread. And at times it worked. And then we eventually started seeing the dirty underside of manipulating images too much. Basically, we determined you can wiggle it all around and 'fit' every control point with amazing results...and still be way off in other parts of the image.

    From what I understand now, things have moved on to some amazing results. 20 years ago was just the beginning of digital image analysis.

    As fancy as we were back then, I still had to hand-copy our results for the digital level loops through our aerial targets. I got a call from one of the photogrammetrists saying one of the targets (on a flat bridge deck) wouldn't come into focus in the stereo model; it was off by about 0.18'. When I went over everything, I discovered I had transposed an elevation when I hand-copied the list. I was amazed someone could detect an error of less than two tenths by stereo comparison from a flying height of 4000'.

    There is a science there, but you would probably have better results if you just had an old Kodak image of the block rather than the 'Frankenstein' Google image.
     
  4. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    One of the coolest software tools I use is Global Mapper. SOP for many investigations dealing with the historical conditions of a tract of land 40 to 60 years ago is acquiring modern digital orthophotos (free from TNRIS for sites in Texas) and using them to control the rectification of aerial photos downloaded from the archives of the US Geological Survey (also free of charge). Obviously, terrain displacement can be a major factor, but in relatively flat country (no need to mention any states that come to mind), the whole business works without surprises.
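    That workflow can be scripted as well. A rough sketch using GDAL's Python bindings; the file names, ground coordinates, and pixel picks are all hypothetical stand-ins for points matched between the old frame and the modern orthophoto:

        from osgeo import gdal

        # Hypothetical control points picked by matching features between the
        # historical USGS frame and a modern orthophoto: GCP(x, y, z, pixel, line)
        gcps = [
            gdal.GCP(615123.4, 3349876.5, 0, 512, 430),
            gdal.GCP(617890.1, 3349210.9, 0, 3480, 610),
            gdal.GCP(616502.7, 3346655.2, 0, 2010, 3390),
            gdal.GCP(614201.0, 3347012.8, 0, 250, 3105),
        ]

        src = gdal.Open("usgs_1957_frame.tif")   # scanned historical photo
        tmp = gdal.Translate("tmp_gcps.tif", src, GCPs=gcps,
                             outputSRS="EPSG:26914")   # NAD83 / UTM zone 14N
        # A first-order polynomial is usually adequate in flat terrain; rough
        # country would call for true orthorectification instead.
        gdal.Warp("usgs_1957_rectified.tif", tmp, dstSRS="EPSG:26914",
                  polynomialOrder=1, resampleAlg="bilinear")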
     
  5. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    My latest hypothesis is that to get what are very nearly single-frame images from Google Street View, you compute the heading of the camera car from the successive latitudes and longitudes of the images in the sequence before and after the one of interest and enter that heading (in decimal degrees) into the "___h" part of the URL line of your browser with yaw set to 90, i.e. "90y" in the same URL. A refresh should bring up an image with the forward or backward line of travel at the center of the field of view, camera dead level.

    Then, stepping the "____h" component of the URL in 45° increments should bring up images with the center of each component camera's field of view at the center of the Street View image, but with the stitched margins showing some distortions. Everything within some angular distance of the center of the image, before the stitching begins, should (hypothetically) behave as a single-frame image.
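    A sketch of that heading computation, using the standard spherical forward-azimuth formula; the coordinates below are hypothetical stand-ins for two successive camera stations, not actual Street View tags:

        import math

        def heading_deg(lat1, lon1, lat2, lon2):
            # Forward azimuth from point 1 to point 2, degrees clockwise from north.
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            y = math.sin(dlon) * math.cos(phi2)
            x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
            return math.degrees(math.atan2(y, x)) % 360.0

        # Heading of the camera car from the stations before and after the panorama
        travel = heading_deg(30.26950, -97.75210, 30.26985, -97.75330)

        # Step around the horizon in 45-degree increments from that heading
        for k in range(8):
            print(f"{(travel - 45 * k) % 360:.2f}h")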
     
  6. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    Here's an example showing that same building on West 6th Street in Austin. All of the images are taken from nominally the same camera station, i.e., at the same lat and long, but exactly 45° apart in orientation, beginning parallel with the direction of travel of the camera car (which I computed as 288.18° True from successive lats and longs in the image sequence along the street):

    Heading = 288.18°
    https://www.google.com/maps/@30.269...4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656

    Heading = 243.18°
    https://www.google.com/maps/@30.269...4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656

    Heading = 198.18°
    https://www.google.com/maps/@30.269...4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656

    Heading = 153.18°
    https://www.google.com/maps/@30.269...4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656

    Heading = 108.18°
    https://www.google.com/maps/@30.269...4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
     
  7. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    Except that the "___y" component of the URL line is evidently the field of view, and the "______t" component is the zenith angle of the camera orientation.

    At first impression, "45y" (a 45° field of view) looks like a good standard setting for retrieving components of the Google Street View image that don't have any stitch lines and can probably be treated as single-frame images.
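    Under that reading, the "@" component of the URL can be assembled mechanically. A sketch; the ordering and the "3a" token follow the pattern visible in Street View URLs, which is undocumented and could change on Google's side:

        def streetview_at(lat, lon, fov, heading, zenith):
            # "y" = field of view, "h" = heading, "t" = zenith angle
            # (90t = camera level), per the interpretation in this thread.
            return f"@{lat:.7f},{lon:.7f},3a,{fov:g}y,{heading:g}h,{zenith:g}t"

        print(streetview_at(30.2695, -97.7521, 45, 288.18, 90))
        # -> @30.2695000,-97.7521000,3a,45y,288.18h,90t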
     
  8. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    One thing that I've discovered from examining Google Street View images is that you can verify the number of separate images that were stitched together to make the panorama just by checking the frequency of stitch artifacts in the panorama. If there were eight camera images, then there should be a stitch every 45 degrees.
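    That frequency check can be rough-automated on a saved 360° panorama: stitch seams tend to show up as narrow columns of elevated horizontal gradient, and eight cameras should put the peaks about 45° apart. A sketch assuming numpy and Pillow, with a hypothetical file name:

        import numpy as np
        from PIL import Image

        pano = np.asarray(Image.open("pano.jpg").convert("L"), dtype=float)
        grad = np.abs(np.diff(pano, axis=1)).mean(axis=0)   # per-column gradient strength

        deg_per_px = 360.0 / pano.shape[1]
        candidates = np.flatnonzero(grad > grad.mean() + 3 * grad.std())
        print(np.round(candidates * deg_per_px, 1))   # seam peaks ~45 degrees apart?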
     
  9. Nick H

    Nick H Member

    Joined:
    Sep 5, 2016
    Messages:
    5
    Likes Received:
    11
    Interesting.

    Google has had at least two different optical systems in widespread use; one used fish-eye lenses, one didn't. See: Google Street View: Capturing the World at Street Level

    Metric Localization using Google Street View might also be worth a read if you haven't seen it already - sounds like you're not the first person to try and do this kind of thing. It also mentions an API for directly retrieving rectilinear projections of the street view images. I think they're referring to the Street View Images API. It's pretty well documented, and if you get an API key you should be able to get images directly if you want them:
    https://developers.google.com/maps/documentation/streetview/intro
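    For example, a single request to the documented endpoint returns a rectilinear view (Python sketch; the coordinates are hypothetical and the key is a placeholder):

        import requests

        params = {
            "size": "640x640",
            "location": "30.2695,-97.7521",   # hypothetical lat,long on W 6th St
            "heading": 288.18,                # degrees clockwise from true north
            "fov": 45,
            "pitch": 0,                       # 0 = camera level
            "key": "YOUR_API_KEY",
        }
        r = requests.get("https://maps.googleapis.com/maps/api/streetview", params=params)
        with open("streetview.jpg", "wb") as f:
            f.write(r.content)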
     
  10. Kent McMillan

    Kent McMillan 7-Year Member

    Joined:
    Jun 30, 2010
    Messages:
    11,058
    Likes Received:
    1,928
    Location:
    Austin, TX
    One thing that I need to investigate further is the datum to which the lat and long position tags in Google Street View refer.

    The first impression of the "45y" setting does hold up as a standard choice, but it actually returns an image from Street View with about a 76° field of view, not 45°.
     
