One part per million
Accuracy of GNSS sensors is generally expressed as a fixed amount plus 1 or 0.5 ppm (RMS). I recall these are the same values from 20 years ago.
Why are the static and RTK accuracy specifications for sensors different?
Where did the 1 ppm for RTK come from, and why is it different from the 0.5 ppm for static?
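For anyone unfamiliar with how the "fixed + ppm" spec is read: 1 ppm means 1 extra millimeter of uncertainty per kilometer of baseline. A small sketch (the 8 mm fixed term here is just an assumed example value, not from any particular datasheet):

```python
def horizontal_uncertainty_mm(baseline_km, fixed_mm=8.0, ppm=1.0):
    """RMS uncertainty for a fixed + ppm spec.

    1 ppm of baseline length equals 1 mm per km, so the ppm term
    is simply ppm * baseline_km when working in millimeters.
    """
    return fixed_mm + ppm * baseline_km

# Example: at 10 km, an 8 mm + 1 ppm RTK spec gives 18 mm,
# while 8 mm + 0.5 ppm (a typical static spec) gives 13 mm.
print(horizontal_uncertainty_mm(10.0, ppm=1.0))
print(horizontal_uncertainty_mm(10.0, ppm=0.5))
```

The point of the question stands: the fixed parts are often similar, and it is the ppm term that separates the RTK and static claims as baselines grow.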
After lots of testing, I have to say that the accuracy statements for the R-10 are valid in my experience.
RTK will be slightly less accurate simply because there is less data to process.
Static involves a lot of repetitive observations. Start with an understanding of statistics before trying to understand GNSS.
Paul in PA
I think it has to do with initialization.
The Static processor initializes every observation independently.
The Kinematic processor does one initialization for a series of observations. The initialization is valid as long as lock on the satellites is maintained.
An RTK accuracy estimate on a data collector is usually just a simple average of many small data sets.
I'm told that Javad averages the weighted value of each epoch. I wouldn't be surprised if that's true for others as well.
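One plausible way to do that kind of weighted per-epoch combination is inverse-variance weighting, where each epoch's solution is weighted by 1/sigma^2. This is only a hedged sketch of the general technique; no vendor's actual algorithm is confirmed here, and the numbers are made up:

```python
def weighted_epoch_average(positions, sigmas):
    """Inverse-variance weighted mean of per-epoch solutions.

    positions: per-epoch coordinate values (e.g. northing, meters)
    sigmas:    per-epoch 1-sigma estimates (meters)

    Each epoch gets weight 1/sigma^2, so tight epochs dominate.
    Returns the weighted mean and the formal sigma of that mean,
    which shrinks as more epochs accumulate.
    """
    weights = [1.0 / (s * s) for s in sigmas]
    total = sum(weights)
    mean = sum(w * p for w, p in zip(weights, positions)) / total
    combined_sigma = (1.0 / total) ** 0.5
    return mean, combined_sigma

# Two tight epochs near 10.00-10.02 m outweigh one loose epoch at 10.08 m
mean, sigma = weighted_epoch_average([10.02, 10.00, 10.08],
                                     [0.01, 0.01, 0.05])
```

Contrast with a plain average (which would be pulled further toward the noisy epoch): this is the difference between "simple averaging" and weighting each epoch, as described above.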