The On-Farm Network is growing, and so is access to trial results! Single-site research reports are organized in the database below, providing detailed farm-level information packaged in an easy-to-read document.
How it works:
Filter the reports in the database table below by clicking the desired Crop, Year, Trial Type, and/or Major Region. Within the database table, click any column header to sort the table. To view a single-site report, select the Trial ID; the single-site research report opens in a new tab.
In the database table below, trials with significant yield differences are highlighted in green in the yield difference column. On each single-site report, significance is indicated by a ‘yes/no’ in the overall yield results table.
A trial that does not meet the trial requirements (e.g., field history) is not included in the overall average yield difference.
Important information for interpreting statistics:
Two statistical tests are used to analyze On-Farm Network data, as illustrated in the sketch after this list:
- Paired t-tests
- Analysis of variance (ANOVA)
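For readers who are curious what a paired t-test looks like in practice, here is a minimal sketch in Python using the SciPy library. The strip yields are made up for illustration, and this is not the On-Farm Network's own analysis code; it simply shows how paired strip yields feed into the test.

```python
# Minimal sketch of a paired t-test on hypothetical strip yields (bu/ac).
# Each treated strip is paired with its neighboring untreated strip.
from scipy import stats

treated_strips = [62.1, 59.8, 64.3, 61.0, 58.7, 63.5]    # hypothetical yields, bu/ac
untreated_strips = [60.4, 58.9, 62.0, 60.1, 57.2, 61.8]  # hypothetical yields, bu/ac

t_stat, p_value = stats.ttest_rel(treated_strips, untreated_strips)
print(f"t statistic = {t_stat:.2f}, P-value = {p_value:.3f}")

# For trials comparing more than two treatments, a one-way ANOVA would be used
# instead, e.g. stats.f_oneway(trt_a, trt_b, trt_c).
```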
Coefficient of Variation (CV): This is a statistical measure of random variation in a trial. The lower the value, the less variable the data.
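For reference, CV is typically computed as the standard deviation divided by the mean, expressed as a percentage. A quick sketch with hypothetical strip yields:

```python
# Coefficient of Variation: sample standard deviation as a percentage of the mean.
import statistics

strip_yields = [62.1, 59.8, 64.3, 61.0, 58.7, 63.5]  # hypothetical yields, bu/ac
cv = statistics.stdev(strip_yields) / statistics.mean(strip_yields) * 100
print(f"CV = {cv:.1f}%")  # a lower CV means less strip-to-strip variability
```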
Confidence Level: For our trials, we use a 95% confidence level. In statistics, the confidence level indicates how certain we are of the outcome of our statistical analysis.
P-value: While the confidence level tells us how certain we are of the results of our statistical analysis, the P-value indicates whether the results are statistically significant. The P-value is a probability calculated as part of the statistical analysis. A P-value less than 0.05 indicates a statistically significant result, while a P-value of 0.05 or greater indicates the results are not significant.
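In practice, the link between the confidence level and the P-value cutoff is simple: the cutoff (often called alpha) is 1 minus the confidence level, so a 95% confidence level gives a cutoff of 0.05. A small sketch of that decision rule (the function name and example values are only illustrative):

```python
# Significance decision rule: compare the P-value to alpha = 1 - confidence level.
ALPHA = 0.05  # 1 - 0.95 confidence level

def is_significant(p_value: float, alpha: float = ALPHA) -> bool:
    """Return True when the result is statistically significant."""
    return p_value < alpha

print(is_significant(0.03))  # True  -> significant yield difference
print(is_significant(0.20))  # False -> difference likely due to field variability
```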
Interpreting Significance: So, if our statistical analysis indicates a significant yield difference, what does that actually mean? A significant yield response (P-value < 0.05) means we are 95% confident that the yield difference resulted from the treatment. Alternatively, if the statistical analysis indicates there is no significant yield difference (P-value of 0.05 or greater), we cannot conclude that the treatment affected yield; any observed difference is likely a result of field variability rather than the treatment.
Why are statistics important? Why does significance matter? Why can’t we just look at differences in yield between treated and untreated strips to determine the effect of a treatment?
Variability in yield is expected from strip to strip across an on-farm trial because of the variability that occurs across a field. So, when we get yields from each of our trial strips at the end of the season, the question is whether those yield differences are simply a result of variability in the field or a result of the treatment or management practice investigated in the trial. We can answer that question using statistics. If the results are statistically significant, we can say that the yield difference between the treatments or management practices tested in the trial was caused by the treatment or management practice. If the result is not significant, then any yield difference is likely a result of variability within the field and not a result of the treatment or management practice.
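To make that concrete, here is a sketch (with made-up yields) of two hypothetical trials that show the same 2.0 bu/ac average yield difference. In the first, the treated strips win consistently; in the second, the differences bounce around from strip to strip, and the paired t-test flags only the consistent trial as significant.

```python
# Two hypothetical trials, each with the same 2.0 bu/ac average yield difference.
# Trial A: consistent strip-to-strip advantage; Trial B: large strip-to-strip swings.
from scipy import stats

trial_a_treated   = [61.8, 63.0, 60.5, 62.4, 61.1, 63.6]  # bu/ac
trial_a_untreated = [59.9, 61.1, 58.4, 60.3, 59.0, 61.7]

trial_b_treated   = [66.2, 57.8, 67.0, 56.5, 65.4, 59.0]
trial_b_untreated = [59.2, 61.3, 58.8, 61.3, 58.5, 60.8]

for name, treated, untreated in [("Trial A", trial_a_treated, trial_a_untreated),
                                 ("Trial B", trial_b_treated, trial_b_untreated)]:
    avg_diff = sum(t - u for t, u in zip(treated, untreated)) / len(treated)
    _, p_value = stats.ttest_rel(treated, untreated)
    print(f"{name}: average difference = {avg_diff:+.1f} bu/ac, P-value = {p_value:.3f}")
```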
Interpreting Results – An Example: In a soybean double inoculant trial, we test the effect of a double vs. a single inoculant on soybean yield. Let’s say, for example, that the average yield difference between double- and single-inoculated soybeans for one trial was 1.5 bu/ac. This yield difference will be indicated as significant or not significant. If the yield difference is statistically significant, we can say we are 95% confident that the 1.5 bu/ac increase in yield is a result of the double inoculant treatment. But if the 1.5 bu/ac yield difference is not significant, we cannot attribute it to the double inoculant; the difference most likely resulted from natural variability across the trial area.
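Continuing that example with invented numbers, the sketch below pairs double- and single-inoculated strip yields that average a 1.5 bu/ac difference and prints how the result would be read once the P-value is known. The yields and wording are illustrative only, not results from an actual trial.

```python
# Hypothetical soybean double inoculant trial: six strip pairs whose yield
# differences average 1.5 bu/ac. The paired t-test decides whether that difference
# is attributed to the treatment or to field variability.
from scipy import stats

double_inoculant = [58.9, 61.2, 57.4, 60.8, 59.6, 62.1]  # bu/ac
single_inoculant = [57.6, 59.5, 56.0, 59.4, 58.0, 60.5]  # bu/ac

avg_diff = sum(d - s for d, s in zip(double_inoculant, single_inoculant)) / len(double_inoculant)
_, p_value = stats.ttest_rel(double_inoculant, single_inoculant)

print(f"Average yield difference = {avg_diff:.1f} bu/ac")
if p_value < 0.05:
    print(f"P = {p_value:.3f}: significant, so the difference is attributed to the double inoculant.")
else:
    print(f"P = {p_value:.3f}: not significant, so the difference likely reflects field variability.")
```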