Fluorescent (calcium) image pre-processing
DataView is primarily intended for analysing electrophysiological data, but it has some facilities that may be useful for fluorescent image analysis when fluorescence intensity is recorded as a series of values at equally spaced time intervals. For example, in calcium imaging experiments, a transient increase in fluorescence in a localized patch of tissue indicates a brief increase in the concentration of free calcium within that patch, which in turn might indicate an increase in synaptic and/or spiking activity in that region.
DataView does not have any facility to interact with a camera directly. The assumption is that image capture is done externally and that the data are then exported in a format that DataView can read. This might typically be a CSV file, with each column containing the time series from a separate region of interest. The columns are then imported as separate traces in DataView.
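For concreteness, the expected layout might be produced from external image-analysis software roughly as in the sketch below. This is only an illustration of the file format: the file name, frame interval, ROI count and column names are hypothetical, not anything fixed by DataView.

```python
import numpy as np
import pandas as pd

# Suppose 'intensities' holds one mean-fluorescence value per frame for each
# region of interest (ROI), sampled at a fixed frame interval (here 100 ms,
# purely for illustration).
frame_interval_ms = 100.0
n_frames, n_rois = 600, 8
intensities = np.random.default_rng(0).normal(1000.0, 5.0, size=(n_frames, n_rois))

# One column per ROI; DataView imports each column as a separate trace.
columns = {f"ROI_{i + 1}": intensities[:, i] for i in range(n_rois)}
pd.DataFrame(columns).to_csv("roi_traces.csv", index=False)
```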
Photo-bleach compensation
In a calcium imaging experiment, the amount of fluorescence depends on the calcium concentration, but it also depends on the amount of the calcium-sensitive fluorophore present. One common problem is that when a fluorophore is exposed to light at its excitation frequency, it can be gradually but irreversibly destroyed, in a process known as photo-bleaching. This means that the emission of fluorescent light decays over time, and this can contaminate any signals due to variations in calcium concentration.
The precise molecular mechanism of photo-bleaching is not entirely clear, but empirical studies show that it often follows a negative mono- or bi-exponential time course, with a decline to some steady-state value representing the intrinsic tissue fluorescence plus that emitted by any unbleachable fluorophore. The background fluorescence can thus often be fitted to one of the following equations:
\[V_t = V_{\infty} + (V_0 - V_{\infty} )e^{-t/\tau_m} \qquad\textsf{mono-exponential}\]
\[V_t = V_{\infty} + w_0(V_0 - V_{\infty} )e^{-t/\tau_0} + w_1(V_0 - V_{\infty} )e^{-t/\tau_1} \qquad\textsf{bi-exponential}\]
where V0 is the fluorescence value at the start of exposure, Vt is the value at time t from the start, V∞ is the value after the exponential decline stabilises, τm is the time constant of the mono-exponential decline, τ0 and τ1 are the time constants of the 1st and 2nd exponential terms in the bi-exponential decline, and w0 and w1 are the corresponding weightings of those two terms.
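For reference, the two decay models translate directly into code. A minimal sketch is given below; the function and parameter names are illustrative, not DataView identifiers.

```python
import numpy as np

def mono_exp(t, v_inf, v0, tau_m):
    """Mono-exponential decline from V0 towards V_inf with time constant tau_m."""
    return v_inf + (v0 - v_inf) * np.exp(-t / tau_m)

def bi_exp(t, v_inf, v0, w0, tau_0, w1, tau_1):
    """Weighted sum of two exponential declines towards V_inf."""
    return (v_inf
            + w0 * (v0 - v_inf) * np.exp(-t / tau_0)
            + w1 * (v0 - v_inf) * np.exp(-t / tau_1))
```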
If the raw data have a reasonable fit to one of these functions they can be transformed to compensate for the exponential decline. This is usually done by dividing the raw values by the fitted exponential values on a point-by-point basis. If the data were an exact fit to the curve, this would make the trace have a value of 1 throughout its length. The absolute numerical value of fluorescence intensity can be restored simply by multiplying all corrected values by the first uncorrected intensity level (i.e. the level before any photo-bleaching has occurred). Deviations from the fitted curve caused by transient changes in calcium concentration then become apparent as deviations from this value.
This is a proportional compensation method. An alternative linear method is to just subtract the fitted curve from the raw data (making all values 0 for a perfect fit), and then to add back the initial value.
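Both compensation methods reduce to element-wise arithmetic once a fitted bleach curve is available. A minimal sketch, assuming raw and fitted are NumPy arrays of equal length, is shown below; it illustrates the arithmetic described above, not DataView's internal code.

```python
def proportional_compensation(raw, fitted):
    """Divide by the fitted bleach curve, then rescale to the initial raw intensity."""
    return raw / fitted * raw[0]

def subtractive_compensation(raw, fitted):
    """Subtract the fitted bleach curve, then add back the initial raw intensity."""
    return raw - fitted + raw[0]
```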
One consequence of proportional compensation is that a deviation from the baseline that occurs early in the raw recording, before much photo-bleaching has occurred, will end up as a smaller signal in the compensated record than a deviation of the same absolute raw size that occurs late in the recording, after substantial photo-bleaching. However, this is often what is wanted, since the later signal represents a greater fractional change from baseline (and hence presumably a bigger calcium signal) than the earlier signal. A downside of proportional compensation is that any non-fluorescent noise in the signal is also amplified more in the later part of the recording than the earlier.
A major problem with either method is that the genuine calcium-related signal is included in the data from which the fit is derived, and thus the fit equation may not be a true representation of baseline photo-bleaching. This means that quantitative values derived after such photo-bleach compensation should be treated with considerable caution.
Remove Exponential Drop
- Load the file exponential drop.
This contains 8 traces representing calcium fluorescence levels recorded at 8 different regions of interest in a fly larva that has been genetically engineered to express a calcium-sensitive fluorophore. There is a very obvious exponential-like decline in the signal level over the period of the recording in each trace.
- Select the menu command Transform: Remove exponential drop to open the Estimate and Remove Exponential Drop dialog. (This is similar but not identical to that used to estimate a membrane time constant.)
The program makes an initial estimate of the parameters of a mono-exponential decline, and the display at the top-right of the dialog shows the raw signal with the exponential curve shown as a red line superimposed on the data (for an explanation of parts of the dialog box not mentioned in this tutorial, press F1 to see the on-line help). The line is a reasonable, but not exceptionally good, fit to the data.
- Click the Preview button in the Transform group near the bottom of the dialog.
A new window opens showing the raw data in the upper trace, and the transformed data, with exponential compensation as described above, in the lower. The compensation is applied using the estimated parameters of the mono-exponential.
- Change the value of tau 0 in the main dialog to 20000 (units are ms), and note how much worse the fit gets, and how non-linear the compensated data now are.
- Click the Guess button to restore the original value (65900).
Note the BIC value of 1800. The interpretation of the BIC is described elsewhere, but the take-home message here is that lower BIC values indicate a better model fit than higher values.
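DataView does not spell out in this tutorial how it computes the BIC, but for least-squares fits it is commonly calculated as
\[\mathrm{BIC} = n\,\ln\!\left(\frac{RSS}{n}\right) + k\,\ln(n)\]
where n is the number of data points, RSS is the residual sum of squares of the fit, and k is the number of fitted parameters. The k ln(n) term penalizes extra parameters, so a more complex model must improve the fit appreciably before its BIC falls below that of a simpler one.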
- Click the Curve fit button. The program performs iterative least-squares fitting to find the parameters that minimize deviation between the data and the mono-exponential. After a while the red curve stabilizes as a new value of tau 0 is achieved. The red line is a tighter fit to the data, and apart from a deviation in the early part, the compensated data seen in the Preview window are reasonably linear.
The BIC value is now 460, confirming that the fitted curve is a better fit than the initial estimate.
- Select the Double exponential radio button, and then click Curve fit again.
When the curve stabilizes, note that the BIC value has dropped to 126, indicating that the improvement in the fit brought about by fitting the double exponentials is “worth it” compared to the mono-exponential, even though it requires more parameters. The transformed data are now reasonably linear when viewed in the Preview window.
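The same workflow can be reproduced outside DataView with a standard least-squares fitter. The sketch below continues the earlier sketches (mono_exp, bi_exp and proportional_compensation), fits both models to a synthetic bleaching trace, and keeps whichever has the lower BIC; it is not DataView's own fitting algorithm, and the sample interval, starting guesses and noise level are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic bleaching trace standing in for one trace of 'exponential drop'.
rng = np.random.default_rng(0)
dt = 100.0                                        # assumed sample interval, ms
t = np.arange(0, 120_000, dt)
raw = bi_exp(t, 300, 1000, 0.6, 8_000, 0.4, 50_000) + rng.normal(0, 5, t.size)

def bic(y, y_fit, n_params):
    """Least-squares BIC, as in the formula quoted earlier."""
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Hold V0 at the first sample so that the weights are not redundant with it.
v0 = raw[0]
mono = lambda t, v_inf, tau_m: mono_exp(t, v_inf, v0, tau_m)
bi = lambda t, v_inf, w0, tau_0, w1, tau_1: bi_exp(t, v_inf, v0, w0, tau_0, w1, tau_1)

p_mono, _ = curve_fit(mono, t, raw, p0=[raw.min(), 30_000])
p_bi, _ = curve_fit(bi, t, raw, p0=[raw.min(), 0.5, 10_000, 0.5, 60_000], maxfev=10_000)

bic_mono = bic(raw, mono(t, *p_mono), 2)
bic_bi = bic(raw, bi(t, *p_bi), 5)
fitted = bi(t, *p_bi) if bic_bi < bic_mono else mono(t, *p_mono)
compensated = proportional_compensation(raw, fitted)
print(f"mono BIC {bic_mono:.0f}, bi BIC {bic_bi:.0f}")   # lower BIC = preferred model
```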
What Could Possibly Go Wrong?
The fit process stops either when the convergence criterion is met, which means that the percentage change in parameter values is less than the specified criterion for 3 successive iterations, or when the number of iterations reaches the set Maximum iterations. In the latter case, you can run the fit again to see whether the criterion can be reached with further iterations, or just work with the parameter values that have been achieved so far.
If the data are a poor fit to the model, the fit parameters may go off to extreme and obviously incorrect values. This is flagged by a message saying the fit “Failed to converge”, followed by restoration of the original parameters. You could try reducing the iteration Step size to see whether that helps, or manually adjusting the starting conditions to try to get a better initial fit. But if the data are too distorted, automatic fitting may be impossible, and you will have to either abandon the transformation or accept the best fit that you can achieve by manual adjustment.
Manually Transform a Single Trace
At this stage, you could save the individual compensated trace to a file simply by clicking the Transform button. Before doing this you should select whether to write a new file or overwrite the original (only possible for native DataView files), and whether to add the transformed trace as a new trace, or to replace the existing raw trace.
Auto-Transform Multiple Traces
There are 8 traces in the exponential drop file, and it would be tedious to manually transform each in turn. However, we can batch process traces so that the steps are carried out automatically.
- Enter 1-8 into the Batch fit edit box.
- Ensure that the Write new file box is checked (it is by default).
- Select the Replace trace radio button option.
- Click the Transform button.
- Enter a new file name when prompted, followed by a comment if desired.
- Watch the Progress dialog box as each trace is processed. You can click Cancel to stop the process, or Skip if you want to ignore a trace.
The auto transform process takes each selected trace in turn and tries to fit first a mono- and then a bi-exponential curve. Whichever fit has the lower BIC value is used to transform that trace. The Progress box shows which traces have been processed, the BIC values and number of iterations for each exponential, and the exponential type used in the transform.
- When all traces have been processed, the Progress box exposes a Close button (and hides the Skip and Cancel buttons).
The Progress dialog above shows that the bi-exponential fit for trace 4 did not converge within the Maximum iterations (500), but that its BIC was still (just) below that for the mono-exponential and so has been used. The bi-exponential fit for trace 5 shows the “failed to converge” message, and so the mono-exponential fit, which converged successfully, has been used. [Note that you cannot compare BIC values between traces – they are only useful for comparing fits of different models to the same data.]
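For processing done outside DataView, the same per-trace logic is easy to reproduce with the functions sketched earlier: fit both models to each trace, skip any fit that fails outright, and transform with whichever surviving fit gives the lower BIC. The sketch below simplifies DataView's behaviour (it does not distinguish a fit that merely hit the iteration limit from one that failed), and `traces` stands for a hypothetical list of 1-D NumPy arrays sharing the time base `t`.

```python
def fit_and_compensate(trace, t):
    """Fit both models to one trace and compensate with whichever has the lower BIC."""
    v0 = trace[0]
    mono = lambda t, v_inf, tau_m: mono_exp(t, v_inf, v0, tau_m)
    bi = lambda t, v_inf, w0, tau_0, w1, tau_1: bi_exp(t, v_inf, v0, w0, tau_0, w1, tau_1)
    candidates = []                                     # (BIC, fitted curve) pairs
    for model, p0, k in [(mono, [trace.min(), 30_000], 2),
                         (bi, [trace.min(), 0.5, 10_000, 0.5, 60_000], 5)]:
        try:
            p, _ = curve_fit(model, t, trace, p0=p0, maxfev=10_000)
            candidates.append((bic(trace, model(t, *p), k), model(t, *p)))
        except RuntimeError:
            pass                                        # this fit failed to converge
    _, best_fit = min(candidates, key=lambda c: c[0])   # BICs compared within this trace only
    return proportional_compensation(trace, best_fit)

compensated_traces = [fit_and_compensate(tr, t) for tr in traces]
```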
- Click Close to dismiss the Transform dialogs. The new file with transformed traces loads automatically.
- Select the View: Matrix view: Show menu command to see thumbnails of the transformed traces. They are not well aligned within their windows at this stage.
- Hold down the control key and click the Autoscale toolbar button above the main chart view. Then control-click the Reduce gain toolbar button.
- If you alternately activate the original and the transformed file in the main chart view while observing the Matrix view, you can readily see how the transform has removed the exponential drop apparent in each trace of the original file.
ΔF/F0
When comparing time-series changes in fluorescence between different tissues and different experiments, it is usual to normalize the levels in some way because unavoidable differences in fluorophore loading and tissue characteristics can significantly affect absolute fluorescence levels.
One fairly standard procedure in fluorescent image analysis is to use the signal-to-baseline ratio (SBR), also called ∆F/F0 normalization. In this, F0 is a measure of the baseline fluorescence in the resting state, and ∆F is the moment-by-moment deviation from that baseline. Thus:
\[SBR_t = \frac{\Delta{F_t}}{F_0} = \frac{F_t-F_0}{F_0}\]
There are several different methods for determining F0 described in the literature, and some of these are available in DataView.
- Select the Transform: Normalize: delta F/F0 menu command, which activates the Normalize dialog box with the delta F/F0 option selected.
The Camera baseline is normally left at 0, but can be set to a positive value if the imaging camera has a non-zero output even in darkness. This value is subtracted from both Ft and F0 during normalization (unless the F0 value is set explicitly by the user - see below).
The F0 offset is also normally left at 0, but can be set to a positive value and added to the divisor during normalization if there is a risk that on-the-fly F0 calculation might yield values very close to zero.
There are five options for obtaining F0 values in DataView.
- F0 is set to the average Ft value measured from the trace over the whole recording. This may be appropriate in a preparation that is continuously active, where it is not possible to measure a "resting" value. This can yield negative ∆F/F0 values.
- F0 is set to the average Ft value measured from the trace over a user-specified time window in the recording. This may be appropriate if a preparation is initially quiescent, and then something is done to activate it. The F0 measurement can be made during the quiescent period.
- F0 is set to an explicit value chosen by the user. This could be useful if one of the traces contained a recording from a non-reactive region of tissue. The average value of this trace could be obtained from the Analyse: Measure data: Whole-trace statistics menu command and used directly as F0 for the other traces.
- The percentile filter option continuously updates F0 throughout the recording. It does this by passing a sliding window of user-specified duration over the data, and setting F0 to the specified percentile value (typically 20%) within the window (e.g. Mu et al., 2019). This not only normalizes Ft, but also filters the ∆F/F0 values to emphasize transient changes at the expense of plateaus. However, it should be used with care because this filter type is not well characterized from a theoretical signal-processing perspective.
- The lowest sum process finds the user-specified window of data in the trace with the lowest average value in the recording, and uses this as F0. This may be useful if quiescent periods of data occur at unpredictable times within a recording.
Note that in all these cases except the user-specified value, the F0 value is calculated independently for each trace and applied to that trace only. If you wish to apply the F0 value derived from one trace to all the traces in a recording, you should calculate it separately using the various analysis procedures in DataView, and then set F0 to this user-specified value.
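As an illustration of the arithmetic involved (not DataView's code), the sketch below computes ∆F/F0 with a sliding-window percentile F0, including the camera baseline and F0 offset terms described above. SciPy's percentile_filter stands in for whatever windowing DataView uses, and the function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import percentile_filter

def delta_f_over_f0(f, window_pts, percentile=20.0,
                    camera_baseline=0.0, f0_offset=0.0):
    """Sliding-window percentile version of delta F/F0 (one of the options above)."""
    f = np.asarray(f, dtype=float) - camera_baseline         # remove the camera's dark level
    f0 = percentile_filter(f, percentile, size=window_pts)   # running baseline estimate
    return (f - f0) / (f0 + f0_offset)                       # offset guards against F0 near 0

# e.g. a 20th-percentile baseline over a 30 s window at a 100 ms sample interval:
# sbr = delta_f_over_f0(trace, window_pts=300, percentile=20)
```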