Thoughts on the Testing Process

Before I lay out the results of the tests and compare how the optics did, I'd like to share some of my thoughts on the testing process and the strengths and weaknesses of my methodology.  First of all, it was a great opportunity to work with an array of scopes that I would likely never have had access to but for the thoughtfulness and generosity of my friend at U.S. Optics and Ilya, the optics guru/webmaster at Optics Thoughts.  This turned out to be a lot more than just a chance to play with what for me could be viewed as prohibitively priced “toys”; it was a chance to learn a great deal about shooting, specifically the seeing part of it.

I tried to approach the tests in a way that I haven’t seen done before.  One of the reasons I started this blog was that I kept looking for information on certain topics that just didn’t seem to be available.  What I tried to do in these tests was evaluate gear not in terms of how neat it was, or how nice a certain facet of it appeared upon examination, but how the product as a whole affected specific aspects of my shooting.  Evaluating optical clarity or brightness, for example, may be important and probably does relate to an aspect of work in the field.  But taking an optic and seeing exactly how it performs against competing optics where the rubber hits the road, and doing it in a carefully measured and analyzed way, is not something I have seen before.  This small article explains the strengths and shortcomings of my attempt.

The Bad:

The primary issue with the testing is sample size: only one shooter participated in the study (n=1).  I could also say that it would have been better to have access to more samples of each scope, which it would, but I don’t think that scopes are as variable a good as, say, guitars.  Also, I wasn’t really trying to pick the scopes apart mechanically or optically, just seeing how well their designs and features enabled me to shoot.

With more shooters, the results of the tests would be much more likely to apply to other shooters.  As it is, they relate specifically to me.  That places the burden on me to explain how the subjective interacted with the objective test results.  Luckily, I think my mind is geared towards analyses such as these.  Hopefully I can bring something meaningful out of the results for you.

There were some issues with consistency in methodology.  These were mostly things that couldn’t be avoided.  My trigger broke during optic #2, test #3.  The replacement was quite different.

I wasn’t able to use the same mount for the SR-8c as I did for all the other optics.  I don’t think this turned out to be a big deal, but it was something different, and less than ideal.  When I re-tested the SR-8c there was really no need to re-adapt to it.

One of the major wildcards was me.  My original intent was to get up to speed with the AR so that I was at a plateau.  It turned out that I hit that plateau pretty fast, and at a much lower level of skill than I had hoped.  Overall my level of skill was quite stable, with some unintended learning and adaptation going on in test #2, the X-Box drill.

The tests took place over a rather long span of time, probably about two and a half to three months.  I would have liked to spend a significant amount of time with each optic, because I think that really getting to know the strengths and weaknesses of a system takes a lot of time.  The problem with all that time is that it’s hard to maintain a static level of skill for that long.  I did my best, but the subconscious just wants to adapt.

The major differences in how the optics themselves were tested were that I spent a lot more time with the SR-8c than with any other optic, and that the frequency of testing sped up at about the time I tested the SR-4c.  I also spent the week just prior to my test of the SWFA scope shooting, which I’m positive skewed the accuracy numbers favorably for that scope.

I made a few minor shooting errors in testing.  A larger sample of shooters probably would have ironed this problem out.  There were times when I just messed up and, for example, screwed up the hit ratio in the 7 yard testing of the Swarovski Z6i.  I got my hold wrong with the SR-4c.  I used the wrong mark to hold with the SWFA scope.  Test #2 typically had at least one “short circuit” moment with each scope except for the SR-8c.  Some of those were more egregious than others.  I’ll note them in the tests.

Better ammo would have been nice, especially for the 100 yard precision test and the ‘long’ range transitions.  I do think that XM193 is consistent, especially within a single lot, which is what I used for the testing.

The Good:

First of all, and most importantly, I think that the tests were valid, except for test #3, DD25.  I just couldn’t shoot it well enough to be consistent.  Otherwise I think the tests essentially got at what I wanted them to get at.  I will re-state the purpose of each test as I cover the results.

The state of the barrel’s cleanliness was extremely consistent, as I cleaned the barrel with Patch Out and Accelerator prior to beginning each round of tests.  By the time I got to the point in each round where precision mattered, the barrel was thoroughly fouled, but not so much as to affect the precision negatively.

The tests were conducted with care to ensure uniformity in setup and administration.  The same type of targets was used for each optic.  I shot them in the same places, at the same distances.  The times of day were generally the same.  The conditions didn’t change as much as they might have over such a long span of time.  When they did, such as the grass getting longer, I found ways to overcome it without compromising the tests, such as shooting from my vehicle instead of from the ground.

One thing I haven’t discussed yet is that I tested the Aimpoint T1 Micro on the first three tests as a sort of non-magnified ‘control’ optic.  I also re-tested the SR-8c on the first three tests to see how my skills changed over the course of the testing process.  I’ll bring those results into play as I go over those specific tests.

Finally, for most of the tests I think my shooting was consistent enough to reveal the differences between the optics.  Where I was less than consistent, I’m at least able to recognize that the testing was essentially invalid, as with DD25, or that my skill changed mid-stream, as with the X-Box test.

I realize that I’ve been throwing out a lot of densely packed information, perhaps not in the most digestible fashion.  I decided to take after the format Cal over at Precision Rifle Blog uses to present his scope tests, which, by the way, are done in a much more systematic and rigorous manner.  Hopefully you’ll be able to make sense of the results, which are coming up next.
