The following addresses a technical point, but also illustrates some general principles for dealing with program inputs (parameters) that are to be varied. Such quantities are often called control parameters.
In this case the specific parameter that will vary is the grid spacing used to solve some problem with finite difference approximations (FDAs). Variation of that parameter, while keeping all other inputs/parameters fixed, will be essential in computing an error estimate for the calculations.
As a specific illustration of the method, we consider a finite difference calculation involving the solution of a set of ordinary differential equations in time, where we have replaced the continuum domain:
\[ 0 \le t \le t_{\rm max} \]
with a grid (mesh, lattice) of discrete times \( t^n \):
\[ \begin{eqnarray} t^n &=& (n - 1) \Delta t, \quad\quad n = 1,\, 2,\, 3,\, \ldots,\, n_t - 1,\, n_t \\ &=& 0,\, \Delta t,\, 2 \Delta t,\, 3 \Delta t,\, \ldots,\, (n_t - 2) \Delta t,\, (n_t - 1) \Delta t \end{eqnarray} \] where, in terms of the total number of grid points (discrete time steps), \(n_t\), the time spacing \(\Delta t\) is given by
\[ \Delta t = \frac{t_{\rm max} - t_{\rm min}}{n_t - 1} = \frac{t_{\rm max}}{n_t - 1} \] Here we first write the expression assuming that the "start time" of the simulation, \( t_{\rm min} \), is some generic value, but then specialize to the case \( t_{\rm min} = 0 \), as we can usually do. The important thing to keep in mind is that it is always some finite interval of the independent variable that is replaced with a discrete set of mesh points in the discretization process.
Note that the superscript \(n\), which labels the discrete times, runs over the values
\[ n = 1, 2, 3, \ldots, n_t - 1, n_t \]
i.e. starting at 1 rather than 0, so that there is a natural mapping of the set of discrete times to a Matlab array: recall that the indexing of Matlab arrays (vectors, matrices, ...) always starts at 1. In other programming languages, where array indices start at 0, another convention might be more convenient.
Also note that the denominator in the definition of \(\Delta t\) is \( n_t - 1\), not \(n_t\): there is always one more point in a grid than there are intervals, and \(\Delta t\) defines the length/size of each of those intervals.
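As a concrete illustration, here is a minimal Matlab sketch that builds such a grid; the variable names (tmax, nt, dt, t) and the sample values are our choices for this example, not a fixed convention:

   % Build a uniform grid of nt discrete times on 0 <= t <= tmax.
   tmax = 10.0;               % final time (sample value)
   nt   = 65;                 % total number of grid points
   dt   = tmax / (nt - 1);    % nt points define nt - 1 intervals
   t    = zeros(1, nt);       % row vector to hold the discrete times
   for n = 1 : nt
      t(n) = (n - 1) * dt;    % t(1) = 0, t(nt) = tmax
   end

The built-in call t = linspace(0, tmax, nt) produces the same vector; the explicit loop is used here simply to emphasize the indexing convention.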
As noted above, when we solve any differential equations with FDAs we will want to investigate what happens to the solution as the mesh size---\( \Delta t \) in this case---is varied.
Operationally, it is particularly convenient, although not necessary, to consider sequences of \( \Delta t \) in which each member is a factor of 2 smaller than the previous one, i.e.
\[
\Delta t, \, \frac{\Delta t}{2}, \, \frac{\Delta t}{4},
\, \frac{\Delta t}{8}, \ldots
\]
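In Matlab such a sequence is trivial to generate; for example, starting from an arbitrary illustrative value of \( \Delta t \):

   % First four members of the sequence: dt, dt/2, dt/4, dt/8
   dt  = 0.1;                 % arbitrary starting spacing (for illustration)
   dts = dt ./ 2.^(0:3)       % no semicolon, so the values are displayed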
It is then also convenient to define \( \Delta t \) in terms of a new integer-valued parameter, \( \ell \), that we will call the discretization level, or simply level, so that for any \( \ell \ge 0 \) we have
\[ n_t \equiv 2^\ell + 1 \] and \[ \Delta t \equiv \frac{t_{\rm max} - t_{\rm min}}{2^\ell} = \frac{t_{\rm max}}{2^\ell} \] This has the benefit that it is easier for us (humans) to specify a value of \(\ell\), a "simple" integer, than the corresponding value of \(n_t\) (i.e. (some power of 2) + 1), or, worse, the actual value of \(\Delta t\) (i.e. \(t_{\rm max}\) divided by some power of 2). Moreover, and ultimately more importantly, it makes it simpler and clearer to write a script (program) that runs the calculation at various resolutions: \( \ell \) becomes a natural variable to use in a for loop that iterates over calculations performed with increasing numbers of grid points (increasing resolution).
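For example, a minimal sketch of such a loop might look as follows (the level bounds and tmax are sample values, and the actual calculation is left as a placeholder):

   % Loop over discretization levels: each increment of l doubles the
   % number of grid intervals and halves the mesh spacing.
   tmax = 10.0;
   for l = 6 : 9
      nt = 2^l + 1;           % nt = 65, 129, 257, 513
      dt = tmax / 2^l;        % dt halves as l increases by 1
      fprintf('level = %d   nt = %d   dt = %g\n', l, nt, dt);
      % ... perform the finite difference calculation at this level ...
   end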
We note that very small values of \( \ell \) will generally not be of much use since they will lead to very poor approximations. As we will see, we will tend to use values such as \(\ell = 6, 7, 8, \ldots \) corresponding to \(n_t = 65, 129, 257, \ldots \). However, there are no hard and fast rules for what values of \(\ell\) should be used for any given simulation. That will generally need to be determined through (numerical) experimentation, where \(\ell\) is systematically varied until the estimated error in the calculations is satisfactory.
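As a sketch of how such an experiment might be organized, suppose we have a hypothetical function fda_solve(l) that performs the calculation at level l and returns some representative result (say, the solution at \( t = t_{\rm max} \)); the change in that result from one level to the next then provides a crude error estimate:

   % Compare results from successive levels; when the change between
   % levels is acceptably small, the calculation has (apparently)
   % converged.  fda_solve is a hypothetical user-supplied function.
   prev = fda_solve(6);
   for l = 7 : 10
      curr = fda_solve(l);
      fprintf('level = %d   change from previous level = %g\n', ...
         l, abs(curr - prev));
      prev = curr;
   end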
For any given FDA, achieving a satisfactory error may not always be possible. Computer resource limitations (the amount of memory required, the amount of time we have to wait for the calculation to complete) may preclude the use of mesh spacings small enough to yield a sufficiently accurate solution. When this arises, a change in approach is needed. For example, we might try using a more accurate FDA (higher order approximation), or try to simplify the underlying continuum model so that the calculations are less costly. The need to balance the competing demands of accuracy and computational cost is a central theme throughout computational science.
Finally, we note that what constitutes "sufficiently accurate" calculations is also problem- and situation-dependent. Sometimes, especially if a problem has not been solved before and the calculations are revealing qualitatively new or unexpected phenomena, an accuracy of a few percent or so will suffice. In other circumstances, where one is trying to probe the "fine structure" (details) of some process, higher accuracy will be needed.