I simply think this is not worth the effort. Answer one question: what is the difference in the long run between 10/20/30/40/50 and 30/30/30/30/30?
Expected number of landed hits:
1.5 for either configuration
Chance to land no hits at all:
with 10/20/30/40/50: 0.9*0.8*0.7*0.6*0.5 = 0.1512 ~ 15%
with 30/30/30/30/30: 0.7^5 = 0.16807 ~ 17%
Chance to land all five shots:
with 10/20/30/40/50: 0.1*0.2*0.3*0.4*0.5 = 0.0012 ~ 0.12%
with 30/30/30/30/30: 0.3^5 = 0.00243 ~ 0.24%
Alright, it's not that different, but it is different. The results also differ for the in-between numbers of hits, but those are more complicated to work out by hand and I'm at work (see the sketch below). The point is that although both weapons have the same expected number of hits, one is more reliable at landing at least one hit, while the other is more likely to land many hits. We tend to consider only the averages (the average outcome is the same, so everything must be the same), but there's more to a distribution than its mean.
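For anyone curious about those in-between cases, here's a quick standalone sketch (plain C++, not actual OXC code; the function name is mine) that computes the chance of every possible number of hits for both configurations, using the standard dynamic-programming recurrence for a sum of independent coin flips with different probabilities:

```cpp
#include <cstdio>
#include <vector>

// Returns P(exactly k hits) for k = 0..n, given per-shot accuracies.
std::vector<double> hitDistribution(const std::vector<double>& acc)
{
    std::vector<double> dist(acc.size() + 1, 0.0);
    dist[0] = 1.0; // before any shot is fired: certainly 0 hits
    for (double p : acc)
    {
        // Fold in one more shot, iterating downwards so dist[k-1]
        // still holds its value from the previous iteration.
        for (std::size_t k = dist.size() - 1; k >= 1; --k)
            dist[k] = dist[k] * (1.0 - p) + dist[k - 1] * p;
        dist[0] *= 1.0 - p;
    }
    return dist;
}

int main()
{
    const std::vector<double> ramp = {0.1, 0.2, 0.3, 0.4, 0.5};
    const std::vector<double> flat = {0.3, 0.3, 0.3, 0.3, 0.3};
    const auto a = hitDistribution(ramp);
    const auto b = hitDistribution(flat);
    std::printf("hits  10/20/30/40/50  30/30/30/30/30\n");
    for (std::size_t k = 0; k < a.size(); ++k)
        std::printf("%4zu  %13.4f%%  %13.4f%%\n",
                    k, 100.0 * a[k], 100.0 * b[k]);
}
```

The k = 0 and k = 5 rows reproduce the ~15%/~17% and ~0.12%/~0.24% figures above, and the middle rows show how the two configurations split the remaining probability differently.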
Now, is it worth the extra coding? I can't tell, because my coding for OXC has been minimal and it would be a lot of work for me (discovering how to add new ruleset item properties, amongst other things). But a simple quadratic formula like the one we already have for damage scaling, this time defining the chance to hit as "F*(a + b*i + c*i^2)" where "i" is the shot number within the autoshot (so i = 2 means the 2nd shot), would let one define the effect of "walking your shots" onto the target, or alternatively of recoil messing up your aim on rifles (accuracy decreasing as more shots are fired, via negative parameters), or both.
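A minimal sketch of what that could look like, assuming hypothetical new per-weapon ruleset parameters a, b, c (F being the weapon's base accuracy; none of this is existing OXC code):

```cpp
#include <algorithm>
#include <cstdio>

// Per-shot accuracy curve as proposed above: F*(a + b*i + c*i^2),
// where i is the 1-based shot index within the autoshot burst.
// a, b, c are hypothetical ruleset parameters, not real OXC ones.
double autoShotAccuracy(double F, double a, double b, double c, int i)
{
    const double acc = F * (a + b * i + c * i * i);
    return std::clamp(acc, 0.0, 1.0); // keep it a valid probability
}

int main()
{
    // a=0, b=0.2, c=0 with F=50% reproduces the 10/20/30/40/50 ramp;
    // a negative b or c would instead model recoil degrading your aim.
    for (int i = 1; i <= 5; ++i)
        std::printf("shot %d: %.0f%%\n",
                    i, 100.0 * autoShotAccuracy(0.5, 0.0, 0.2, 0.0, i));
}
```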