If his perception is unreliable as you claim it is, then no amount of checking and rechecking his data will make any difference because his perception of his results is unreliable as well. He will forever be trapped in a rabbit hole like a blind man with no hope of accuracy or correction. There is no way around this. We've been over this before and you have learned nothing. Moving along now.
This is wrong, and reveals a fundamental ignorance about how science is done and why it "works".
Consider a very simple example: Scientists want to measure the width of a particular sheet of paper.
One way to do this would be for a bunch of scientists to guess at the width. One says "It looks like it might be about 20 cm wide". The next holds up his hands and says "I think 20 cm is too much. Maybe 15 cm is about right." A third eyeballs it and says "I think the first guy is closer to the mark. Let's say 19 cm."
Now, we have three "eyewitnesses" trying their best to judge the width of the sheet of paper. Are their perceptions reliable? How can we tell? Well, one way is to compare how far off their estimates are from one another. So perhaps we try this: average over the three eyewitnesses to get a "best estimate" of the width of the paper: (20 + 15 + 19)/3 = 18, so the sheet might be 18 cm wide, based on eyewitness accounts.

But how certain are we about this? Well, the range of the eyewitness "measurements" goes from 15 cm at the low end to 20 cm at the upper end, which is a 5 cm range. It might be reasonable to quantify this "random variation" in the estimates using half of this range: 2.5 cm, which we can round up to 3 cm. These three scientists could then say "Our best estimate, judging by eye, is that the width of the paper is (18 plus or minus 3) cm."
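If you want that arithmetic spelled out, here is a minimal sketch in Python. The three numbers are just the guesses above, and the half-range rule is only a crude stand-in for a proper uncertainty analysis:

```python
# Three eyewitness estimates of the paper's width, in cm.
estimates = [20, 15, 19]

# "Best estimate": just the average of the guesses.
best = sum(estimates) / len(estimates)               # (20 + 15 + 19) / 3 = 18.0

# Crude uncertainty: half the spread between the largest and smallest guess.
half_range = (max(estimates) - min(estimates)) / 2   # (20 - 15) / 2 = 2.5, rounded up to 3 above

print(f"Width is roughly ({best:.0f} +- {half_range:.1f}) cm")
```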
Note that MR's first claim in the quote above is wrong. Here, we accept that all three scientists' perceptions of the width are unreliable. But it does not follow that "no amount of checking and rechecking will make any difference". Notice what we have done: we have quantified just how "unreliable" we think the perceptions are. The combined data from these three scientists has a quantified uncertainty of "plus or minus 3 cm". Also, we might expect that if we called in 10 more independent scientists to repeat the "measurement", the result would still come out in the range (18 +- 3) cm.
Now, this expectation might prove to be correct, or it might prove to be incorrect. All we can do is to collect the data and analyse it. In the process, we might find that the "best" (average) measurement changes, and perhaps the range of uncertainty will also change. Suppose we get 100 independent scientists to eyeball the paper, and we do the analysis and find that the width is now (20.5 +- 1.0) cm. The "best" estimate has gone up a bit, but adding more data has reduced the estimated uncertainty from +- 3 cm to +- 1 cm.
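The specific numbers in that paragraph are invented for illustration, but the general effect, that averaging more independent estimates narrows the uncertainty, is easy to demonstrate. Here is a rough simulation in Python. It assumes, purely for the sake of the toy model, a "true" width of 21 cm and a random eyeball error with a spread of about 2 cm per scientist, and it quotes the standard error of the mean rather than the half-range rule:

```python
import random
import statistics

random.seed(1)   # fixed seed, just so the example is repeatable

# Toy model: pretend the paper is really 21 cm wide and each scientist's
# eyeball estimate is off by a random error with a spread of about 2 cm.
TRUE_WIDTH = 21.0

def eyeball_estimate():
    return random.gauss(TRUE_WIDTH, 2.0)

for n in (3, 100):
    data = [eyeball_estimate() for _ in range(n)]
    mean = statistics.mean(data)
    # Standard error of the mean: scatter of the estimates divided by sqrt(n).
    sem = statistics.stdev(data) / n ** 0.5
    print(f"{n:3d} scientists: ({mean:.1f} +- {sem:.1f}) cm")
```

The more scientists you add, the smaller the quoted uncertainty gets, even though no individual scientist's perception has improved at all.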
Notice that, even though everybody's perception is unreliable, some "checking" has made a difference, contrary to MR's naive claim that no checking could ever make a difference and there's "no hope of accuracy or correction". In science, there is almost always something that can be done to improve accuracy and to reduce uncertainty.
We're not done yet. Looking over our new dataset from 100 scientists, we notice some things. It turns out that two guys out of the 100 gave wildly different answers from the other 98. One guy said "It looks like the paper might be 20 metres wide". The other guy said "It looks like it could be 3 cm wide". These may be what are called outliers - data that doesn't seem to "fit" all the other observations. With some investigation, maybe we can track down possible reasons for these outliers and so correct or disregard them as likely errors. For instance, upon re-interviewing the guy who said 20 metres, we might hear "Oh, I meant 20 centimetres! My bad." We might decide that this guy is too error-prone to trust his eyewitness testimony and so disregard it. Or, if there's sufficient evidence, we might correct the "mismeasurement", perhaps by asking him to estimate it again. On the other hand, on interviewing the second guy, we discover that he's off his meds and acting strangely, so we might decide that the safest thing to do would be not to include his 3 cm estimate in the data set (leaving a note in the research paper to that effect, of course).
After correcting for the outliers and excluding unreliable data, we now find that the width of the paper is (20.5 +- 0.9) cm, say. Are we done, now?
No! If we were so inclined, we could repeat the experiment, preferably with an independent new group of 100 scientists and fresh researchers to compile and analyse the data. Of course, all the same "errors" due to human limitations in perception would still apply to this new measurement. Let's say the result, after analysis, is (22 +- 1.2) cm for the width. Now what?
Well, the first thing to notice is that our two independent data sets agree with one another! Specifically, widths in the range (20.8 cm to 21.4 cm) are covered by both of the independent data sets. So, maybe our "combined" result from 200 observations is now (21.1 +- 0.3) cm, which we find by taking the midpoint of that overlap region and half of its width. (There are more sophisticated statistical analyses that could be done, but this gives you an idea.)
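Here is what that back-of-the-envelope combination looks like as a calculation. (A statistician would more likely use an inverse-variance weighted mean, but the overlap picture is easier to see at a glance.)

```python
def interval(centre, half_width):
    return (centre - half_width, centre + half_width)

a_lo, a_hi = interval(20.5, 0.9)   # first group of 100: 19.6 to 21.4 cm
b_lo, b_hi = interval(22.0, 1.2)   # second group of 100: 20.8 to 23.2 cm

# The region covered by BOTH data sets.
lo, hi = max(a_lo, b_lo), min(a_hi, b_hi)

if lo < hi:
    centre = (lo + hi) / 2          # 21.1 cm
    half = (hi - lo) / 2            # 0.3 cm
    print(f"Overlap: {lo:.1f} to {hi:.1f} cm -> ({centre:.1f} +- {half:.1f}) cm")
else:
    print("No overlap - time to go hunting for the reason!")
```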
At this stage, look at what has happened since the very first "experiment". We went from an initial estimate of (18 +- 3) cm to a new estimate of (21.1 +- 0.3) cm. The uncertainty has decreased by an order of magnitude since the first experiment, which must be a good thing (and also something MR falsely claimed was impossible!). Notice that nothing changed about the accuracy of human perception or the general reliability of eyewitnesses. Are we done now?
No! Is getting a bunch of scientists together to "guesstimate" the width of a piece of paper really the best way to determine the width? Eureka! Of course not! What if we gave each of our 200 scientists a calibrated ruler, which they could then use to measure the width? Spending a few bucks on a trip to the stationery store, we acquire 200 cheap(ish) rulers and set them to work. The good news is that each one of these rulers has markings down to 1 millimetre, which means that each scientist should, in principle, be able to use this scientific equipment to measure the width to an accuracy of around +- 0.5 millimetres, or +- 0.05 centimetres. This offers a potentially big improvement over just eyeballing and guesstimating. After the 200 have done the job and we have analysed the data, we now find a width of (21.001 +- 0.002) cm, let's say.
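Why does the quoted uncertainty end up so much smaller than any single ruler reading? If the reading errors are independent and random, averaging N readings shrinks the uncertainty of the average by roughly a factor of the square root of N. A rough back-of-the-envelope check, taking the 0.05 cm reading uncertainty from the paragraph above:

```python
# Back-of-the-envelope for the ruler experiment. Each reading is good to about
# half of the smallest division (0.5 mm = 0.05 cm). If those reading errors are
# independent and random, averaging N readings shrinks the uncertainty of the
# average by roughly a factor of sqrt(N).
single_reading_uncertainty = 0.05          # cm, from the 1 mm markings
n_scientists = 200

uncertainty_of_mean = single_reading_uncertainty / n_scientists ** 0.5
print(f"Expected uncertainty of the average: +- {uncertainty_of_mean:.3f} cm")
# Prints about +- 0.004 cm, the same ballpark as the (21.001 +- 0.002) cm above.
```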
Notice that this result, too, is consistent with the earlier "best guesstimate" of the scientists, because 21.001 cm is in the range (21.1 +- 0.3) cm. This is good news. It increases our confidence that the scientists' perception of lengths in cm is not fundamentally inaccurate, compared to the answers we get using calibrated rulers. Of course, things might not have turned out that way. If the new measurement had fallen outside the range of the initial "guesstimate" measurement, then we would have to consider a number of possibilities: (1) the scientists' perceptions of width are more flawed than we thought they would be; (2) the rulers used in the second experiment were not properly calibrated; (3) our analysis of one or both experiments is incorrect for some reason; (4) there was something systematically wrong with the procedures used to collect data in one or both experiments; (5) a superhuman aquatic alien species somehow screwed with one or both experiments; (6) there is a previously-undiscovered natural cause that could account for the discrepancy in results.
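For what it's worth, the bare "do these two results agree?" check can itself be written down in a couple of lines. This is a crude version, assuming the two uncertainties are independent and can be added in quadrature; the mismatched second example uses an invented number, just to show what a failure would look like:

```python
# Crude consistency check: do two results agree within their combined
# uncertainties? (Uncertainties added in quadrature, assuming they are
# independent and roughly Gaussian.)
def consistent(x1, u1, x2, u2):
    return abs(x1 - x2) <= (u1 ** 2 + u2 ** 2) ** 0.5

print(consistent(21.1, 0.3, 21.001, 0.002))   # True: guesstimates and rulers agree
print(consistent(21.1, 0.3, 29.7, 0.002))     # False (invented mismatch): time to
                                              # work through possibilities (1)-(6)
```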
Where would we go from here? Answer: design and perform more experiments or analyses to try to confirm which, if any, of explanations (1) to (6) above accounts for the observed discrepancy. In the process, we might learn some very interesting things about (1) human perception; (2) ruler and/or paper engineering; (3) statistical analysis; (4) robust experimental protocols and controls; (5) we are not alone!; (6) new physics of paper (or space or light, etc.).
----
The take-away lesson from this, kids, is that Magical Realist's assumption that the scientific method has no mechanisms to explain or compensate for the unreliability of eyewitness perception or testimony is fundamentally, naively and hopelessly wrong (hopeless in MR's specific case, because MR has proven to be uneducable when it comes to learning how science is done or why it is useful).
"How does any of this apply to the analysis of UFO reports?", you might be asking.
Well, just a few of the lessons we might take away from this simple example include: (1) measurements and observations of unknown phenomena, where possible, should be quantified; (2) we should always do our best to evaluate how reliable any given data set is (from one eyewitness, or three, or two hundred); (3) we should investigate possible causes of misperceptions, miscalibrations, systematic flaws in perception, etc. and try, as much as possible, to control for them in future experiments or observations; (4) we should spin multiple working hypotheses and not simply assume that our first ideas must be correct or the best possible way to do things; (5) we should respect the power of scientific inquiry to get to the truth, through iterative improvements, inbuilt self-corrective mechanisms and objective methods.