Solution for "Simple Question – RTK"?
Posted by EFBURKHOLDER on September 21, 2014 at 11:36 pm
From time to time I archive a thread on the Surveyor Connect bulletin board. Yes, I've archived the "Simple Question for RTK Users/Enthusers" thread because it contains some very good points along with "chatter." If the discussion continues, I'll need to update my download.
That thread struck me as really getting to the core of what we surveyors do and the judgements we make with respect to the quality of our results. In my opinion, there is a solution to the underlying issue that deserves consideration.
I've posted an item on the Global COGO web page and invite interested persons to read, study, comment, and improve on what I've included. My comments apply most directly to item (e) in the original post, meaning the user has in hand the vector components and the associated covariance matrix for those baselines that can be formed into a network.
I'll be happy to respond to inquiries either posted on this board or sent directly to me.
EFBURKHOLDER replied 9 years, 6 months ago · 7 Members · 9 Replies
I have experienced similar results between two different manufacturers' post-processing software packages. For most of my career, I have used Trimble GPS/GNSS receivers and Trimble software to process data. Two years ago, I switched gears and started using Topcon receivers and software. During a transition period, while I was training myself on Topcon MAGNET software, I had the opportunity to utilize Trimble Business Center as well. I would process data using MAGNET, adjust, and then repeat using TBC.
The first thing I noticed was that the standard errors for stations within a network derived from MAGNET-processed baselines were higher than those for the same data processed using TBC. I would spend hours tweaking the data and settings in MAGNET to try to get something resembling what I was getting with TBC. The positions were identical, but the confidence regions were different. Regardless of which variables I changed, I was unable to bring the standard errors down to resemble those derived from a TBC-processed network.
After doing this several times on several different networks, I became convinced that TBC had a more "optimistic" way of computing an error matrix than MAGNET did. Using MAGNET, I would consistently see error propagation 1.5 to 2 times what TBC would produce. I have not documented my findings in any way that would be presentable to a manufacturer for analysis, but it became evident to me rather quickly after starting to use the Topcon package.
EF,
Link doesn’t work. I get this:
Dave
I import vectors to StarNet & I’ve had somewhat similar results as SOJ transitioning from Sokkia’s Spectrum Survey 3.40 to Topcon Tools w/ their “Advanced” module.
Spectrum 3.40 was fussy & quirky, had a very limited antenna modeling interface & would float maybe 10% of the baselines, but 3.40 & earlier had very powerful, precise editing tools which could produce stunning vertical accuracy.
Tools w/ the Advanced Module has an incredible antenna modeling interface, rarely floats, and fixes pretty much everything w/ zero editing, but it has a really compromised set of editing tools and vertical accuracy suffers. Tools specifies, I think, a 2 mm xyz centering tolerance, & I used to spend a lot of time editing to improve its solutions, a pretty much fruitless exercise though.
The differences on the ground were never more than 1/2 centimeter, & although it really hurt to give up the exceptional vertical results, I've adapted procedures to accommodate.
Thanks for your patient work toward standardization, Earl.
> EF,
>
> Link doesn’t work.
>
> Dave
Opens fine for me (but I have a Mac, lol!)
Here it is in plain ole text:
Call for "standardization" in modeling stochastic values for geospatial data: Timely or Premature?
Earl F. Burkholder, PS, PE, F.ASCE
Global COGO, Inc. – Las Cruces, NM 88003
September 22, 2014
Introduction:
The issue of spatial (especially geospatial) data accuracy is becoming increasingly relevant as more and more disciplines and users worldwide use 3-D digital spatial data and as they make important decisions based on the known (or unknown) quality of those data.
The global spatial data model (GSDM) consists of a functional model (equations and geometry) and a stochastic model that is used to establish, track, and report the statistical characteristics (standard deviation) of any/all elements derived from the values stored in a BURKORD™ database.
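To make the stochastic-model idea a bit more concrete, here is a minimal numpy sketch of the covariance propagation that any such model rests on: a quantity derived from stored coordinates inherits its standard deviation from the stored covariance through the Jacobian of the functional model, C_out = J C_in J^T. The coordinates, sigmas, and the slope-distance example are invented for illustration; they are not values from a BURKORD database.

    import numpy as np

    # Two points in an earth-centered X/Y/Z frame (illustrative values, metres).
    p1 = np.array([-1660000.0, -5440000.0, 3420000.0])
    p2 = np.array([-1660350.0, -5440120.0, 3420275.0])

    # Assumed 3x3 covariance for each point (uncorrelated, sigma = 0.010 m per axis).
    cov1 = np.diag([0.010**2] * 3)
    cov2 = np.diag([0.010**2] * 3)
    cov_in = np.block([[cov1, np.zeros((3, 3))],
                       [np.zeros((3, 3)), cov2]])

    # Functional model: slope distance between the two points.
    d = np.linalg.norm(p2 - p1)

    # Jacobian of the distance with respect to the six coordinates.
    u = (p2 - p1) / d
    J = np.hstack([-u, u]).reshape(1, 6)

    # Stochastic model: propagate the input covariance to the derived quantity.
    var_d = (J @ cov_in @ J.T).item()
    print(f"distance = {d:.4f} m, sigma = {np.sqrt(var_d):.4f} m")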
The Goal:
The goal is for all users to be able to start with the same data (RINEX or otherwise) and to be able to compute network/local accuracy of a network of GPS points and get the same (or nearly so) estimates regardless of the "brand" software being used to perform the computations. Currently one can start with the same RINEX files and obtain very similar ΔX/ΔY/ΔZ baseline components regardless of the vendor software being used. A least squares adjustment of these baseline components determines the adjusted coordinates for the network. Currently, the estimates of network and local accuracies computed from the covariance matrix of the results of the adjustment appear to be dependent upon the brand of software being used to determine the covariance matrix of the baseline components (from the RINEX data). The GSDM provides a framework for such standardization.
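As a rough sketch of that workflow (my own simplified illustration, not any vendor's algorithm), the following numpy fragment adjusts two observed baseline vectors from a held control point to one unknown point by weighted least squares, producing adjusted coordinates and their covariance. The vector components and covariance values are made up, and real vendor covariance matrices are generally full 3x3 blocks rather than diagonal.

    import numpy as np

    # Known control point A (earth-centered X/Y/Z, metres) and two observed
    # baseline vectors A->B with their 3x3 covariance matrices (made-up numbers).
    A = np.array([-1660000.0, -5440000.0, 3420000.0])
    vectors = [np.array([125.012, -310.447, 208.733]),
               np.array([125.021, -310.441, 208.741])]
    covs = [np.diag([0.004**2, 0.004**2, 0.008**2]),
            np.diag([0.005**2, 0.005**2, 0.010**2])]

    # Normal equations for the 3 unknown coordinates of B:
    # each observation is  l_i = B - A,  so the design matrix is the identity.
    N = np.zeros((3, 3))
    t = np.zeros(3)
    for l, C in zip(vectors, covs):
        W = np.linalg.inv(C)          # weight matrix of this baseline
        N += W
        t += W @ (A + l)

    cov_B = np.linalg.inv(N)          # covariance of the adjusted coordinates
    B = cov_B @ t                     # adjusted coordinates of B

    print("adjusted B:", np.round(B, 4))
    print("sigmas (m):", np.round(np.sqrt(np.diag(cov_B)), 4))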
Drivers:
This particular effort is driven by:
• A discussion on the "Surveyor Connect" bulletin board in which users provide input with regard to how GPS data are used to estimate uncertainty (make sure you go to the top of the post).
• Efforts by the author to get a handle on the issues of network accuracy and local accuracy for spatial data.
• A request from NOAA in March 2014 for information on how to exploit the commercial value of the vast spatial data holdings of the agency. I responded.
• A need to organize material to be included in a planned Second Edition of the book "The 3-D Global Spatial Data Model: Foundation of the Spatial Data Infrastructure" by the author and published by CRC Press in 2008.
Observations, opinions, and subsequent research:
1. The discussion on the Surveyor Connect bulletin board includes a number of excellent points and some obvious opinions. A discriminating reader chooses what to believe.
a. I believe that open respectful discussion is healthy and aspire to contribute accordingly.
b. The original post identified six different options. My comments are directed to option 5 in which the user has access to the baseline vector components and the associated baseline covariance matrix. The user should get the same answer for the vector components regardless of whether those baselines were computed from RINEX data, obtained from a given RTK controller, or assembled from a mixture of baseline data collected at different times by different brands of equipment. If that stipulation leads to the conclusion that this discussion is premature, so be it.
c. Otherwise, I believe the procedures described in the following section can be used to put (or keep) many users on “the same page.”
2. The first edition of the 3-D book (Burkholder 2008) contains a discussion and examples of network and local accuracies – see chapters 11 and 12. (A short sketch of the relative-covariance computation for local accuracy follows item 2.e below.)
a. Soler and Smith (2010) take exception to the material in the book in their article "Rigorous Estimation of Local Accuracies."
b. In deference to their high level of technical insight, I was initially very worried that I had made an error, misinterpreted the concepts, or omitted important material from the book. But, the more I dug into it, the more I became convinced of the validity of the procedures as published in the 3-D book.
c. According to accepted technical publication standards, I wrote a “Discussion” of their article pointing out what I felt was a mistake on their part. In such a case, the authors are given an opportunity to write a “Closure.” My Discussion (Burkholder 2012) and their Closure (Soler and Smith 2012) are both published in the February 2012 issue of the ASCE Journal of Surveying Engineering.
d. Still not satisfied with the Soler/Smith explanation, I wrote a separate “rebuttal” which ASCE declined to publish. Plan B included filing that rebuttal with the U.S. Copyright Office and posting same on the Global COGO web site.
e. The rebuttal confirms the validity of my work. It also shows that Soler/Smith and I get the “same” answer for all 3 examples – short, medium, and long lines.
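For readers following along, here is a minimal sketch of the relative-covariance computation I take to underlie "local accuracy": the joint covariance of two adjusted points is collapsed to the covariance of the vector between them (C_rel = C_ii + C_jj - C_ij - C_ji), while "network accuracy" comes from each point's own covariance block. The numbers are invented, and this is a summary of the standard formula rather than a transcription of the book or of the Soler/Smith derivation.

    import numpy as np

    # Illustrative joint covariance of two adjusted points i and j (metres^2),
    # partitioned into 3x3 blocks [[Cii, Cij], [Cji, Cjj]].
    Cii = np.diag([0.006**2, 0.006**2, 0.012**2])
    Cjj = np.diag([0.007**2, 0.007**2, 0.013**2])
    Cij = 0.8 * np.sqrt(Cii) @ np.sqrt(Cjj)   # assumed strong positive correlation
    Cji = Cij.T

    # Network accuracy: uncertainty of each point with respect to the datum.
    sigma_i = np.sqrt(np.diag(Cii))
    sigma_j = np.sqrt(np.diag(Cjj))

    # Local accuracy: covariance of the vector from i to j.
    C_rel = Cii + Cjj - Cij - Cji
    sigma_rel = np.sqrt(np.diag(C_rel))

    print("network sigmas i:", np.round(sigma_i, 4))
    print("network sigmas j:", np.round(sigma_j, 4))
    print("local (relative) sigmas i->j:", np.round(sigma_rel, 4))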
3. But that is not the end of the story. The last paragraph of the "rebuttal" states that additional work on network/local accuracy involves changing the tolerance imposed on the "anchor" point of the GPS network. The rebuttal paper holds the standard deviation of the anchor point at 0.010 meters in each X/Y/Z direction. Subsequent tests were conducted for tolerances of 0.002 meters, 1.0 meters, and 5.0 meters. The point of the additional tests was to show that network accuracy "followed" the degradation of tolerance at the anchor point while the "local" accuracy continued to be governed by the correlation (and the quality) of measurements between points. (A small numerical sketch of this behavior follows item 3.c below.)
a. Subsequent tests verified that hypothesis but raised additional questions.
b. Results of the Soler/Smith method do not agree with proven results when standard deviations of 1.0 m and 5.0 m are assigned to the anchor point.
c. Using different brands of software to compute baseline components and associated covariance matrices provided even different results – not good!
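The following toy example (reduced to a single coordinate axis to keep it short, and entirely my own construction) reproduces the behavior described in item 3: a baseline of fixed quality is adjusted with the anchor point weighted at several tolerances, and the network sigma of the far point follows the anchor tolerance while the relative (local) sigma stays at the baseline quality.

    import numpy as np

    # One-axis toy network: anchor point A and point B joined by a single baseline.
    # The baseline quality is held fixed; only the anchor tolerance changes.
    sigma_baseline = 0.005                     # metres, assumed baseline quality
    anchor_sigmas = [0.002, 0.010, 1.0, 5.0]   # anchor tolerances tested

    for s_anchor in anchor_sigmas:
        # Observations: (1) x_A = 0 with sigma s_anchor, (2) x_B - x_A = 100 m.
        A = np.array([[1.0, 0.0],
                      [-1.0, 1.0]])
        W = np.diag([1.0 / s_anchor**2, 1.0 / sigma_baseline**2])
        C = np.linalg.inv(A.T @ W @ A)         # covariance of (x_A, x_B)

        net_B = np.sqrt(C[1, 1])                             # network accuracy of B
        local_AB = np.sqrt(C[0, 0] + C[1, 1] - 2 * C[0, 1])  # local accuracy A-B
        print(f"anchor {s_anchor:6.3f} m  ->  network(B) {net_B:6.3f} m,"
              f"  local(A-B) {local_AB:6.3f} m")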
4. I make mistakes when performing these tests and computations. I have therefore checked and re-checked my work and hope that any discrepancies others find will be reported back to me. In the best case, any discrepancies others find will not adversely affect tentative conclusions. We’ll see.
a. The additional work performed is reported on the Global COGO web site.
b. The computational procedures are all “standard” and documented.
c. Trimble results appear to be the most consistent.
d. Thales and Topcon report baseline statistics as standard deviations and correlations, whereas the Trimble baseline statistics are variances and covariances. I converted the standard deviations and correlations to variances and covariances in order to make the comparisons legitimate (a short sketch of that conversion follows item 4.g below).
e. Leica software is available in the surveying lab at NMSU. Attempts to compute baseline components and statistics from RINEX files have not been successful.
f. From one vendor to another, the network/local accuracy trends remain consistent with the hypothesis, but the magnitudes of the computed accuracies are, I think, significantly different (and unrealistic?).
g. Several vendors are looking at the results I’ve posted but, so far, feedback from the vendors has been minimal and guarded. What might it take for vendors to “buy in” to RINEX type consistency for stochastic results for spatial data?
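The conversion mentioned in item (d) is simple but easy to get backwards, so here is a short sketch of it with invented numbers: the covariance of two components is the correlation coefficient times the two standard deviations, and the variance is the standard deviation squared.

    import numpy as np

    # Example baseline statistics as some vendors report them (invented numbers):
    # standard deviations of dX, dY, dZ and the correlation coefficients.
    sigmas = np.array([0.003, 0.004, 0.009])          # metres
    rho_xy, rho_xz, rho_yz = -0.42, 0.31, -0.55       # unitless correlations

    R = np.array([[1.0,    rho_xy, rho_xz],
                  [rho_xy, 1.0,    rho_yz],
                  [rho_xz, rho_yz, 1.0]])

    # Covariance matrix: C = D R D, where D is a diagonal matrix of sigmas,
    # i.e. cov(x, y) = rho_xy * sigma_x * sigma_y and var(x) = sigma_x**2.
    D = np.diag(sigmas)
    C = D @ R @ D

    print(np.round(C, 10))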
5. Many issues being dealt with on the Surveyor Connect bulletin board (not just the one on RTK uncertainty) come under the GSDM umbrella in one way or another. That is particularly true for issues of map projections, grid/ground differences, basis of bearing, and many others. If NOAA, as a federal agency, could and would adopt a worldwide standard for digital spatial data and for spatial data accuracy, it would eliminate many problems related to miscommunication among spatial data users. I hope to live a long time yet, but I doubt such a standard will be adopted in my lifetime. But that does not mean we should not try!
6. After sending the response to NOAA, I contacted CRC Press and suggested the time might be right for additional promotion of the 3-D book. Their response was to ask about preparing a Second Edition. We are working on that. A Second Edition will contain information on least squares adjustment, an expansion of the network/local accuracy material, additional examples, and more arguments as to the benefits of the entire spatial data community using an integrated model for 3-D digital spatial data.
7. When an entire network is based upon vectors computed by the same brand of software, the resulting network and local accuracies should be legitimate. Two GPS network examples are posted on the Global COGO web site – one is based upon Trimble vectors and the other is based upon Topcon vectors. Both examples exhibit impressive results. A third example network on campus, utilizing a mixture of vectors from different brands, is still being computed and evaluated. That effort is currently held up by the need to make sure "apples" are being compared with "apples."
a. Trimble network link is http://www.globalcogo.com/nmsunet1.pdf
b. The Topcon network link is http://www.globalcogo.com/3DGPS.pdf
References:
Burkholder, E.F. (2012). Discussion of "Rigorous Estimation of Local Accuracies" by T. Soler and D. Smith. Journal of Surveying Engineering, Vol. 138, No. 1, pp. 46-48.
Burkholder, E.F. (2008). The 3-D Global Spatial Data Model: Foundation of the Spatial Data Infrastructure. CRC Press – Taylor & Francis Group, Boca Raton, London, New York.
Soler, T. and D. Smith (2012). Closure to "Rigorous Estimation of Local Accuracies" by T. Soler and D. Smith. Journal of Surveying Engineering, Vol. 138, No. 1, pp. 48-50.
Soler, T. and D. Smith (2010). "Rigorous Estimation of Local Accuracies." Journal of Surveying Engineering, Vol. 136, No. 3, pp. 120-125.
> After doing this several times on several different networks, I became convinced that TBC had a more "optimistic" way of computing an error matrix than MAGNET did. Using MAGNET, I would consistently see error propagation 1.5 to 2 times what TBC would produce.
Was the discrepancy in error estimates still present after scaling the TBC errors to produce a standard error of unit weight of 1.0? In my experience with TBC (and TGO before it), the first pass of an adjustment fails the chi square test, and the error estimates have to be scaled and the adjustment rerun to get a SEUW of 1.0.
TBC also defaults the centering and HI errors to zero… you have to go in and make them something realistic for your ground stations.
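For anyone unfamiliar with that scaling step, here is a minimal sketch of the idea (not TBC's internal code, and the residuals and weights are invented): the a posteriori reference variance is computed from the weighted residuals, and multiplying the a priori observation covariances by it is what makes a rerun adjustment come out with a SEUW of 1.0.

    import numpy as np

    # Residuals v, weight matrix P, and degrees of freedom from a first-pass
    # adjustment (illustrative numbers, not TBC output).
    v = np.array([0.004, -0.007, 0.002, 0.009, -0.005])     # metres
    P = np.diag(1.0 / np.array([0.005, 0.005, 0.004, 0.006, 0.005])**2)
    dof = 3

    # A posteriori reference variance (SEUW is its square root).
    ref_var = (v @ P @ v) / dof
    seuw = np.sqrt(ref_var)

    # Scaling the a priori observation covariances by ref_var makes the
    # rerun adjustment produce a SEUW of 1.0.
    cov_obs = np.linalg.inv(P)
    cov_obs_scaled = ref_var * cov_obs

    print(f"reference variance = {ref_var:.3f}, SEUW = {seuw:.3f}")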
> Was the discrepancy in error estimates still present after scaling the TBC errors to produce a standard error of unit weight of 1.0? In my experience with TBC (and TGO before it), the first pass of an adjustment fails the chi square test, and the error estimates have to be scaled and the adjustment rerun to get a SEUW of 1.0.
I would only use the manufacturer's software to process the data, using plain vanilla settings. I would export the data out as vectors and run minimally constrained adjustments in Star*Net. Comparing Topcon fully constrained and scaled against Trimble fully constrained and scaled, or minimally constrained against minimally constrained, made no difference. I would apply the same weighting strategy across the board.
The computational procedure I use is documented at this web site.
The least squares algorithm is item #2 on that web page, and the actual matrices (with numbers) are included in item #3. If you scroll down you will find the reference variance computed to be 2.144. I did not "scale" it.
Dave,
I'm sorry the link did not work for you. The pdf file is posted on my web site.
If that does not work, let me know and I’ll email the pdf file direct. My email address is splattered all over my web site so I don’t mind providing it here.
[email protected]
Happy computing!