Recently, with the spread of on-vehicle cameras, it has become common to share
on-vehicle videos on the web. If the locations of these videos are available,
they can be used to update virtual city spaces such as Google Street View
frequently and over wide areas. To estimate the location, the uploaded
on-vehicle video must be matched against geo-tagged video captured by a probe
car.
In this paper, we propose an efficient matching method using the Temporal
Height Image (THI), Affine-SIFT, and Bag of Features.
The THI retains relative building-height information from a temporal image
sequence, and robust features are extracted from it with Affine-SIFT. We
realize efficient matching by representing these features with Bag of
Features. We conducted experiments with real image sequences of a city to
show the efficiency of the proposed method.
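The sketch below illustrates this pipeline under stated assumptions: local descriptors (e.g. Affine-SIFT computed on a THI) are quantized into a visual vocabulary, each video segment becomes a Bag-of-Features histogram, and a query segment is localized by nearest-neighbour search over the database histograms. The THI construction shown here (stacking one vertical slice per frame over time) and all function names are illustrative assumptions, not the exact method of the paper.

```python
# Illustrative Bag-of-Features matching pipeline (assumed details, not the
# paper's exact implementation). Descriptors would in practice come from
# Affine-SIFT applied to Temporal Height Images.
import numpy as np
from sklearn.cluster import KMeans


def build_thi(frames, column_index=None):
    """Stack one vertical column per grayscale frame along the time axis.

    Assumption: the Temporal Height Image is approximated by concatenating a
    per-frame vertical slice (capturing relative building height) over time.
    """
    h, w = frames[0].shape[:2]
    col = w // 2 if column_index is None else column_index
    return np.stack([f[:, col] for f in frames], axis=1)  # (height, time)


def build_vocabulary(all_descriptors, vocab_size=256, seed=0):
    """Cluster local descriptors (e.g. Affine-SIFT) into visual words."""
    km = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
    km.fit(np.vstack(all_descriptors))
    return km


def bof_histogram(descriptors, vocabulary):
    """Quantize one segment's descriptors into a normalized BoF histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


def match_segment(query_descriptors, db_histograms, vocabulary):
    """Return the index of the database segment most similar to the query."""
    q = bof_histogram(query_descriptors, vocabulary)
    db = np.asarray(db_histograms)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return int(np.argmax(sims))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in 128-D descriptors (SIFT-like) for three database segments.
    db_descs = [rng.normal(loc=i, size=(200, 128)) for i in range(3)]
    vocab = build_vocabulary(db_descs, vocab_size=32)
    db_hists = [bof_histogram(d, vocab) for d in db_descs]
    # A query drawn near segment 1 should match segment 1.
    query = rng.normal(loc=1, size=(150, 128))
    print("best match:", match_segment(query, db_hists, vocab))
```

Representing each segment as a fixed-length histogram means a query can be compared against the whole database with a single matrix-vector product, which is where the efficiency of the Bag-of-Features step comes from.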
Publications
- K. Fukumoto, H. Kawasaki, S. Ono, H. Koyasu, K. Ikeuchi
"On-Vehicle Video Localization Technique based on Video Search using Real data on the Web", International Journal of ITS Research, 2014.5
- K. Fukumoto, H. Kawasaki, S. Ono, H. Koyasu, K. Ikeuchi
"On-Vehicle Videos Localization using Geometric and Spatio-temporal Information", ITS World Congress, 2013.10
- K. Fukumoto, H. Kawasaki, S. Ono, H. Koyasu, K. Ikeuchi
"Matching technique between the space-time where the plural in-vehicle camera pictures for own car position estimates are effective", the eleventh ITS symposium, 2012.12