The 14th-century maxim known as Ockham's Razor, paraphrased by Jefferys and Berger (1992) as "It is vain to do with more what can be done with less", is usually applied to the interpretation of scientific results. However, it applies equally well to the choice of analysis. Thus, if one has a simple ecological data set, consisting of few species and few samples, ordination is not useful. In this case, the data are easiest to interpret in a simple table.

In a typical data set, however, there are many species and samples. It is impossible for the human mind to contemplate many dimensions simultaneously. The purpose of ordination is to aid the application of Ockham's Razor: a few dimensions are easier to comprehend than many dimensions. A good ordination technique identifies the most important dimensions (or gradients) in a data set, and ignores "noise" or chance variation.

Both indirect and direct gradient analysis have the potential to reduce the dimensionality of a data set. However, reduction of dimensionality is not the only reason to use ordination. Before the advent of CCA, the most widely used ordination techniques were indirect, and the main purpose of ordination was considered "exploratory" (Gauch 1982). It was the task of the ecologist to use his or her knowledge and intuition to collect and interpret data; pure objectivity might actually interfere with the ability to distinguish important gradients. Ordination was often considered as much an art as a science.

Once CCA became available, multivariate direct gradient analysis became feasible. It became possible to rigorously test statistical hypotheses and to go beyond mere "exploratory" analysis.

However, testing hypotheses requires complete objectivity, which ensures repeatability and falsifiability. The two basic motivations for multivariate direct gradient analysis, hypothesis testing and exploratory analysis, conflict with each other to some degree:

**Table 1.** Hypothesis-driven analysis, exploratory analysis, and their major characteristics and motivations. This table applies to regression techniques and indirect gradient analysis in addition to CCA.

| Hypothesis-driven analysis | Exploratory analysis |
| --- | --- |
| Motivating question: "Can I reject the null hypothesis that species are unrelated to some postulated environmental factor or factors?" | Motivating question: "How can I best explain or describe variation in my data set?" |
| Sites must be objectively located: random, stratified random, or regular placement | Sites can be "representative" or subjectively located |
| Analyses must be planned *a priori* | "Data diving" allowed; *post-hoc* analyses, explanations, and hypotheses OK |
| *p*-values meaningful | *p*-values only a rough guide |
| Stepwise techniques not valid without cross-validation | Stepwise techniques (e.g. forward selection) valid and useful |

To perform a hypothesis-driven analysis, one must be very specific about the analyses one wishes to perform. The null hypothesis must be clearly stated, and the data must be collected in a repeatable manner. Usually, the sampling design will involve random, stratified random, or regular placement of study plots. If there is any subjectivity involved in locating or orienting study plots, the results are technically not valid.
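The three objective designs just mentioned can be sketched as simple coordinate generators. This is a minimal illustration, not a standard API: the function names and the 100 x 100 study area are assumptions.

```python
import random

def random_design(n, width, height, rng):
    """Simple random placement: each plot location drawn uniformly."""
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

def stratified_random_design(nx, ny, width, height, rng):
    """Stratified random: one plot placed randomly within each grid cell."""
    dx, dy = width / nx, height / ny
    return [(i * dx + rng.uniform(0, dx), j * dy + rng.uniform(0, dy))
            for i in range(nx) for j in range(ny)]

def regular_design(nx, ny, width, height):
    """Regular placement: plots at the centers of a fixed grid."""
    dx, dy = width / nx, height / ny
    return [(i * dx + dx / 2, j * dy + dy / 2)
            for i in range(nx) for j in range(ny)]

rng = random.Random(1)
print(len(random_design(25, 100.0, 100.0, rng)))               # 25
print(len(stratified_random_design(5, 5, 100.0, 100.0, rng)))  # 25
```

Stratified random placement keeps the objectivity of randomization while guaranteeing even coverage of the study area, which is why it is often preferred over simple random placement.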

All of the analyses, including variations of data transformation and the use of different ordination options (e.g. detrending or not), must be planned in advance; otherwise the user runs the risk of "data diving" or "data mining", i.e. obtaining an artificially significant result because many options are tried. Stepwise techniques (discussed later) are automated forms of "data diving", and will typically also lead to incorrect statistical inference (Cliff 1987, Draper and Smith 1981). The reward for rigorously adhering to these rather stringent criteria is that the statistical inference (i.e. the *p*-value) is valid.
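How trying many options inflates significance can be shown with a small simulation. This is a sketch, not any particular ecological workflow: it uses a stdlib-only permutation test of correlation, and the numbers of sites, analysis options, and trials are arbitrary assumptions.

```python
import random

def perm_pvalue(x, y, rng, n_perm=99):
    """Permutation p-value for the absolute correlation between x and y."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        den = (sum((ai - ma) ** 2 for ai in a) *
               sum((bi - mb) ** 2 for bi in b)) ** 0.5
        return num / den
    obs = abs(corr(x, y))
    hits = sum(1 for _ in range(n_perm)
               if abs(corr(x, rng.sample(y, len(y)))) >= obs)
    return (hits + 1) / (n_perm + 1)

rng = random.Random(42)
n_sites, n_options, n_trials = 20, 15, 200
false_hits = 0
for _ in range(n_trials):
    env = [rng.gauss(0, 1) for _ in range(n_sites)]
    # "Data diving": test 15 response variables that are truly unrelated
    # to env, then keep only the smallest p-value.
    best_p = min(perm_pvalue([rng.gauss(0, 1) for _ in range(n_sites)],
                             env, rng)
                 for _ in range(n_options))
    if best_p < 0.05:
        false_hits += 1
# The realized rate is far above the nominal 0.05.
print("realized false-positive rate:", false_hits / n_trials)
```

Even though every relationship here is pure noise, picking the best of 15 tests yields "significant" results in roughly half of the trials, which is exactly why unplanned analyses invalidate the reported *p*-value.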

Exploratory analyses may lack statistical rigor, but they are still a mainstay of vegetation research. The purpose of exploratory analysis is to find pattern in nature, which is an inherently subjective enterprise. Exploratory analyses incorporate the experience, skill, and intuition of the investigator into the study. Unless one can find another investigator with identical experience, skill, and intuition, the analyses are not strictly repeatable, and hence are not falsifiable. While it is possible to perform exploratory analyses on sample plots located according to a broad, objective sampling design, such careful placement is not required. Indeed, an exploratory analysis can be aided if the investigator subjectively places study plots in locations he or she considers to be important or interesting. Placing plots within vegetation that appears homogeneous is highly subjective, but very useful for evaluating differences between plots.

With exploratory analysis, "data diving" (e.g. using different transformations of species abundances, adjusting ordination options, choosing different subsets of environmental variables, or choosing different subsets of study plots) is not to be avoided. Rather, it is a means for the investigator to learn more about the data set. Stepwise analysis is a form of automated data diving. It is useful as a tool to help discover "important" or "interesting" variables.

Ecologists are frequently misled into believing that *p*-values from stepwise methods have a rigorous meaning, and that the results of stepwise methods yield the best possible model. Such thinking is false.

It is possible to combine exploratory analysis and hypothesis-driven analysis in a larger study. One way of doing this is to perform a two-phase study, in which the first phase is an exploratory analysis, perhaps involving subjectively located plots and employing many variations of analysis. The patterns found in the first phase are then posed as hypotheses for the second phase. The second phase involves the collection of new data from objectively located plots, and a fully planned data analysis.

Another way to combine the two major kinds of analysis is through data set subdivision. The data set is randomly divided into two subsets: an *exploratory* subset and a *confirmatory* subset (alternatively known as *model building* and *model validation* subsets, respectively). Many, varied analyses can be performed on the exploratory subset (including stepwise analysis), and such analyses can rely on intuition, hunches, or superstition. If interesting patterns are found involving particular environmental variables and particular data transformations, these patterns can then be statistically tested using the confirmatory subset. To use data set subdivision properly, samples must be objectively located.
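A minimal sketch of data set subdivision follows; the `split_dataset` helper, the plot names, and the 50/50 split are hypothetical choices for illustration.

```python
import random

def split_dataset(sample_ids, frac_exploratory, rng):
    """Randomly divide sample IDs into exploratory and confirmatory subsets."""
    ids = list(sample_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * frac_exploratory)
    return ids[:cut], ids[cut:]

samples = [f"plot_{i:02d}" for i in range(40)]
exploratory, confirmatory = split_dataset(samples, 0.5, random.Random(7))
# Stepwise selection, transformations, hunches, etc. are tried ONLY on
# the exploratory subset; the hypotheses that emerge are then tested
# once, with valid p-values, on the untouched confirmatory subset.
print(len(exploratory), len(confirmatory))  # 20 20
```

The key point is that the split is random and made once, before any analysis, so the confirmatory subset plays the role of the independently collected second-phase data.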

## Literature cited

Cliff, N. 1987. Analyzing Multivariate Data. Harcourt Brace Jovanovich, San Diego, California.

Draper, N. R. and H. Smith. 1981. Applied Regression Analysis. Second edition. Wiley, New York.

Gauch, H. G. Jr. 1982. Multivariate Analysis in Community Ecology. Cambridge University Press, Cambridge.

Hallgren, E., M. W. Palmer, and P. Milberg. 1999. Data diving with cross-validation: an investigation of broad-scale gradients in Swedish weed communities. Journal of Ecology **87**:1037-1051.

Jefferys, W. H. and J. O. Berger. 1992. Ockham's Razor and Bayesian Analysis. American Scientist **80**:64-72.