IntSquaredExponential

src.geostat.kernel.IntSquaredExponential

Bases: Kernel

Integrated Squared Exponential (IntSquaredExponential) kernel class for Gaussian Processes (GPs).

The IntSquaredExponential class defines a kernel that integrates the Squared Exponential kernel along a specified axis. This kernel is useful for modeling processes with smooth variations along one dimension, starting from a given point.

Parameters:

  • axis (int) –

    The axis along which the integration is performed (e.g., 0 for x-axis, 1 for y-axis).

  • start (float) –

    The starting point of the integration along the specified axis.

  • range (float or Variable) –

    The length scale parameter that controls how quickly the covariance decreases with distance.

Examples:

Creating and using an IntSquaredExponential kernel:

import numpy as np

from geostat.kernel import IntSquaredExponential

# Create an IntSquaredExponential kernel integrating along the x-axis starting from 0.0 with a range of 2.0
int_sq_exp_kernel = IntSquaredExponential(axis=0, start=0.0, range=2.0)

locs1 = np.array([[0.0], [1.0], [2.0]])
locs2 = np.array([[0.0], [1.0], [2.0]])
covariance_matrix = int_sq_exp_kernel({'locs1': locs1, 'locs2': locs2, 'range': 2.0})

Notes:

  • The call method computes the integrated squared exponential covariance matrix based on the specified axis, starting point, and range.
  • The vars method returns the parameter dictionary for range using the ppp function.
  • The IntSquaredExponential kernel is suitable for modeling smooth processes with integrated covariance structures along one dimension.
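The closed form behind the `call` method can be sketched in plain NumPy/SciPy. The helper below (`int_sq_exp_cov` is a hypothetical name, not part of geostat) mirrors the padding-and-differencing scheme visible in the source code further down, assuming the integrand is the common squared-exponential form `exp(-(u - v)**2 / (2 * range**2))`:

```python
import numpy as np
from scipy.special import erf

def int_sq_exp_cov(locs1, locs2, axis=0, start=0.0, rng=2.0):
    """Covariance of X(t) = integral of a squared-exponential GP from `start` to t."""
    # Project onto the integration axis, shift so integration starts at 0,
    # and prepend the start point itself (an integral of length zero).
    x1 = np.concatenate([[0.0], locs1[..., axis] - start])
    x2 = np.concatenate([[0.0], locs2[..., axis] - start])

    # Mixed antiderivative F(u, v) of exp(-(u - v)^2 / (2 rng^2)).
    sdiff = (x1[:, None] - x2[None, :]) / (rng * np.sqrt(2.0))
    k = -rng**2 * (np.sqrt(np.pi) * sdiff * erf(sdiff) + np.exp(-sdiff**2))

    # Inclusion-exclusion over the rectangle [start, s] x [start, t].
    k = k - k[0:1, :] - k[:, 0:1] + k[0:1, 0:1]
    k = k[1:, 1:]
    return np.maximum(0.0, k)  # clip tiny negatives from round-off

locs = np.array([[0.0], [1.0], [2.0]])
K = int_sq_exp_cov(locs, locs, axis=0, start=0.0, rng=2.0)
```

The resulting matrix is symmetric, its first entry is zero (the integral from `start` to `start` has zero variance), and the diagonal grows with distance from the starting point, as expected of an integrated process.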
Source code in src/geostat/kernel.py
class IntSquaredExponential(Kernel):
    """
    Integrated Squared Exponential (IntSquaredExponential) kernel class for Gaussian Processes (GPs).

    The `IntSquaredExponential` class defines a kernel that integrates the Squared Exponential kernel
    along a specified axis. This kernel is useful for modeling processes with smooth variations along 
    one dimension, starting from a given point.

    Parameters:
        axis (int):
            The axis along which the integration is performed (e.g., 0 for x-axis, 1 for y-axis).
        start (float):
            The starting point of the integration along the specified axis.
        range (float or tf.Variable):
            The length scale parameter that controls how quickly the covariance decreases with distance.

    Examples:
        Creating and using an `IntSquaredExponential` kernel:

        ```python
        import numpy as np
        from geostat.kernel import IntSquaredExponential

        # Create an IntSquaredExponential kernel integrating along the x-axis starting from 0.0 with a range of 2.0
        int_sq_exp_kernel = IntSquaredExponential(axis=0, start=0.0, range=2.0)

        locs1 = np.array([[0.0], [1.0], [2.0]])
        locs2 = np.array([[0.0], [1.0], [2.0]])
        covariance_matrix = int_sq_exp_kernel({'locs1': locs1, 'locs2': locs2, 'range': 2.0})
        ```

    Notes:
        - The `call` method computes the integrated squared exponential covariance matrix based on the 
            specified axis, starting point, and range.
        - The `vars` method returns the parameter dictionary for `range` using the `ppp` function.
        - The `IntSquaredExponential` kernel is suitable for modeling smooth processes with integrated 
            covariance structures along one dimension.
    """

    def __init__(self, axis, start, range):

        self.axis = axis
        self.start = start

        # Include `range`, the length scale along the axis of
        # integration, as an explicit formal argument.
        fa = dict(range=range)

        super().__init__(fa, dict(locs1='locs1', locs2='locs2'))

    def vars(self):
        return ppp(self.fa['range'])

    def call(self, e):
        x1 = tf.pad(e['locs1'][..., self.axis] - self.start, [[1, 0]])
        x2 = tf.pad(e['locs2'][..., self.axis] - self.start, [[1, 0]])

        r = e['range']
        sdiff = (ed(x1, 1) - ed(x2, 0)) / (r * np.sqrt(2.))
        k = -tf.square(r) * (np.sqrt(np.pi) * sdiff * tf.math.erf(sdiff) + tf.exp(-tf.square(sdiff)))
        k -= k[0:1, :]
        k -= k[:, 0:1]
        k = k[1:, 1:]
        k = tf.maximum(0., k)

        return k
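As a sanity check on the closed form used in `call` above, the analytic double integral can be compared against direct numerical quadrature. This is a sketch under the assumption that the integrand is the squared-exponential kernel `exp(-(u - v)**2 / (2 * r**2))`; `F`, `K_analytic`, and `K_numeric` are illustrative names, not geostat API:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

r = 2.0  # length scale ("range")

def F(u, v):
    # Mixed antiderivative of exp(-(u - v)^2 / (2 r^2)),
    # the same expression computed term-by-term in `call`.
    a = (u - v) / (r * np.sqrt(2.0))
    return -r**2 * (np.sqrt(np.pi) * a * erf(a) + np.exp(-a**2))

def K_analytic(s, t):
    # Double integral over [0, s] x [0, t] via inclusion-exclusion,
    # matching the row/column subtractions in `call`.
    return F(s, t) - F(s, 0.0) - F(0.0, t) + F(0.0, 0.0)

def K_numeric(s, t):
    # Brute-force quadrature: v over [0, t] (outer), u over [0, s] (inner).
    val, _ = dblquad(lambda u, v: np.exp(-(u - v)**2 / (2 * r**2)),
                     0.0, t, 0.0, s)
    return val

print(K_analytic(2.0, 2.0), K_numeric(2.0, 2.0))
```

The two values should agree to quadrature precision, which is a quick way to convince yourself that the padding and subtraction steps in `call` implement the stated double integral.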