# What are Datacubes?
Datacubes are multidimensional arrays with one or more spatial or temporal dimensions. They are the way in which data is represented in openEO. They provide a nice and tidy interface for spatiotemporal data as well as for the operations you may want to execute on it. As they are arrays, it might be easiest to look at raster data as an example, even though datacubes can hold vector data as well. Our example data consists of a 6x7 raster with 4 bands [blue, green, red, near-infrared] and 3 timesteps [2020-10-01, 2020-10-13, 2020-10-25], displayed here in an orderly, timeseries-like manner:
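As a minimal sketch, the example cube can be pictured as a plain NumPy array. This is a conceptual stand-in, not the openEO API; the dimension order `(t, bands, y, x)` and the short band names are assumptions made for illustration.

```python
import numpy as np

# Conceptual NumPy stand-in for the example datacube (not the openEO API).
# Dimension order (t, bands, y, x) is an assumption for illustration.
t_labels = ["2020-10-01", "2020-10-13", "2020-10-25"]
band_labels = ["blue", "green", "red", "nir"]
cube = np.zeros((len(t_labels), len(band_labels), 6, 7))
print(cube.shape)  # (3, 4, 6, 7): 3 timesteps, 4 bands, a 6x7 raster
```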
It is important to understand that datacubes are designed to make things easier for us, and are not literally a cube, meaning that the above plot is just as good a representation as any other. That is why we can switch the dimensions around and display them in whatever way we want, including the view below:
A dimension refers to a certain axis of a datacube. Our example raster datacube has the spatial dimensions `x` and `y`, and the temporal dimension `t`. Furthermore it has a `bands` dimension, extending into the realm of what kind of information is contained in the cube.
The following properties are usually available for dimensions:
- axis / number
- type (spatial/temporal/bands/other)
- extents or nominal dimension labels
- reference system / projections
Here is an overview of the dimensions contained in our example datacube above:
| # | dimension name | dimension labels | resolution |
|---|----------------|------------------|------------|
| 1 | x | | 10m |
| 2 | y | | 10m |
| 3 | bands | blue, green, red, near-infrared | 4 bands |
| 4 | t | 2020-10-01, 2020-10-13, 2020-10-25 | 12 days |
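The 12-day temporal resolution stated in the table can be checked from the timestep labels. This assumes the three acquisitions described above, ending on 2020-10-25 and spaced 12 days apart.

```python
from datetime import date

# Temporal resolution = interval between consecutive observations.
# Timesteps of the example cube (12 days apart).
timesteps = [date(2020, 10, 1), date(2020, 10, 13), date(2020, 10, 25)]
intervals = [(later - earlier).days for earlier, later in zip(timesteps, timesteps[1:])]
print(intervals)  # [12, 12]
```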
The resolution of a dimension gives information about what interval lies between observations. This is most obvious with the temporal resolution, where the intervals state how often observations were made. The spatial resolution describes the pixel spacing, meaning how many 'real world meters' are contained in a pixel. For the `bands` dimension, the resolution would correspond to the spectral distance between consecutive bands, though this usually varies from band to band.

# Applying Processes on Dimensions

Some processes are typically applied "along a dimension". You can imagine that dimension as an arrow and whatever is happening as a process parallel to that arrow. It simply means: "we focus on this dimension right now".
# Processes on Datacubes
In the following part, the basic processes for manipulating datacubes are introduced.
When filtering data (e.g. `filter_spatial`, `filter_temporal`, `filter_bands`), only the data that satisfies a condition is returned. This condition could, for example, be a timestamp or interval, (a set of) coordinates, or specific bands. By filtering, the datacube becomes smaller, according to the selected data.
filter([🌽, 🥔, 🐷], isVegetarian) => [🌽, 🥔]
In the image, the example datacube can be seen at the top with labeled dimensions. The filtering techniques are displayed separately below. On the left, the datacube is filtered temporally with the interval `["2020-10-15", "2020-10-27"]`. The result is a cube containing only the raster for the timestep that lies within that interval (`"2020-10-25"`), with the bands and spatial dimensions unchanged. Likewise, the original cube is filtered for a specific band `["nir"]` in the middle and for a specific spatial region `[Polygon(...)]` on the right.
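Conceptually, filtering is just selecting along a dimension. A NumPy sketch (not the openEO API; the dimension order `(t, bands, y, x)` and the full band list are assumptions for illustration):

```python
import numpy as np

# Conceptual NumPy stand-in for filtering (not the openEO API).
# Assumed dimension order: (t, bands, y, x).
t_labels = ["2020-10-01", "2020-10-13", "2020-10-25"]
band_labels = ["blue", "green", "red", "nir"]
cube = np.random.rand(3, 4, 6, 7)

# filter_temporal(["2020-10-15", "2020-10-27"]): keep timesteps in the interval
keep_t = [i for i, t in enumerate(t_labels) if "2020-10-15" <= t < "2020-10-27"]
filtered_t = cube[keep_t]            # only 2020-10-25 remains

# filter_bands(["nir"]): keep a single band
keep_b = [band_labels.index("nir")]
filtered_b = cube[:, keep_b]

print(filtered_t.shape)  # (1, 4, 6, 7)
print(filtered_b.shape)  # (3, 1, 6, 7)
```

In both cases the cube keeps all of its dimensions, it just becomes smaller along the filtered one.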
The `apply*` functions (e.g. `apply`, `apply_neighborhood`, `apply_dimension`) employ a process on the datacube that calculates new pixel values for each pixel, based on `n` other pixels.
apply([🌽, 🥔, 🐷], cook) => [🍿, 🍟, 🍖]
For the case `n = 1`, this is called a unary function and means that only the pixel itself is considered when calculating the new pixel value. A prominent example is the `absolute()` function, which calculates the absolute value of the input pixel value. If `n` is larger than 1, the function is called n-ary. In practice, this means that the pixel neighbourhood is taken into account to calculate the new pixel value. Such neighbourhoods can be of a spatial and/or temporal nature: a spatial function works on a kernel that weights the surrounding pixels (e.g. smoothing values with nearby observations), while a temporal function works on a timeseries at a certain pixel location (e.g. smoothing values over time). Combinations of both, i.e. n-dimensional neighbourhoods, are also possible.
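A unary apply is easy to sketch with NumPy (a conceptual stand-in, not the openEO API): the function is evaluated pixel-wise, so the cube's shape is unchanged.

```python
import numpy as np

# Unary apply (n = 1): each new pixel value depends only on the pixel itself.
cube = np.array([[-2.0, 3.0],
                 [ 4.0, -5.0]])
result = np.abs(cube)  # absolute() applied pixel-wise
print(result)          # [[2. 3.] [4. 5.]]
```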
In the example below, a weighted kernel (shown in the middle) is applied to the cube (via `apply_kernel`). To avoid edge effects (pixels on the edge of the image having fewer neighbours), padding has been added in the background.
Of course this also works for temporal neighbourhoods (timeseries), considering neighbours before and after a pixel. To show the effect, two timesteps were added in this example figure. A moving average of window size 3 is then applied, meaning that for each pixel the average is calculated from the timestep in question plus its previous and next neighbours (tn-1, tn and tn+1). No padding was added, which is why we observe edge effects (NA values are returned for t1 and t5, because their temporal neighbourhoods are missing input timesteps).
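The moving average described above can be sketched for a single pixel's timeseries (plain NumPy, not the openEO API; the values are made up):

```python
import numpy as np

# Moving average of window size 3 along time for one pixel's timeseries.
# No padding: the first and last timesteps have an incomplete neighbourhood,
# so NaN is returned there (the edge effect described above).
series = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # values at t1..t5 (made up)
smoothed = np.full_like(series, np.nan)
for i in range(1, len(series) - 1):
    smoothed[i] = series[i - 1 : i + 2].mean()  # mean of tn-1, tn, tn+1
print(smoothed)  # [nan  4.  6.  8. nan]
```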
Alternatively, a process can also be applied along a dimension of the datacube, meaning the input is no longer a neighbourhood of some sort, but all pixels along that dimension (`n` equals the length of the whole dimension). If a process is applied along the time dimension (e.g. a breakpoint detection), the complete pixel timeseries is the input. If a process is applied along the spatial dimensions (e.g. a `mean`), all pixels of an image are the input. The process is applied to all pixels along that dimension, and the dimension continues to exist afterwards; this is in contrast to reduce. In the image below, a `mean` is applied to the time dimension. An example pixel timeseries is highlighted by a green line and processed step by step.
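A NumPy sketch of applying a `mean` along the time axis (conceptual stand-in, not the openEO API): the whole per-pixel timeseries is the input, and the time dimension survives, here with a single remaining label.

```python
import numpy as np

# Apply along a dimension: each pixel's complete timeseries is the input.
# Unlike reduce, the dimension still exists afterwards (length 1 here).
cube = np.arange(12, dtype=float).reshape(3, 2, 2)  # (t, y, x)
along_t = cube.mean(axis=0, keepdims=True)          # cube stays 3-dimensional
print(along_t.shape)  # (1, 2, 2)
```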
In resampling processes (e.g. `resample_cube_spatial`, `resample_cube_temporal`), the layout of a certain dimension is changed into another layout, most likely also changing the resolution of that dimension. This is done by mapping values of the source (old) datacube to the new layout of the target (new) datacube. During that process, resolutions can be upscaled or downscaled (also called upsampling and downsampling), depending on whether they have a finer or a coarser spacing afterwards. A function is then needed to translate the existing data into the new resolution. A prominent example is to reproject a datacube into the coordinate reference system of another datacube, for example in order to merge the two cubes.
resample(🖼️, downscale) => 🟦
resample(🌍, reproject) => 🗺️
The first figure gives an overview of temporal resampling. How exactly the input timesteps are rescaled to the output timesteps depends on the resampling function.
The second figure displays spatial resampling. Observe how in the upsampling process, the output datacube has not gained in information value. The resulting grid still carries the same pixel information, but in higher spatial resolution. Other upsampling methods may yield smoother results, e.g. by using interpolation.
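Nearest-neighbour upsampling illustrates this point well: the output grid is finer, but each new pixel just repeats the nearest source value, so no information is gained. A NumPy sketch (not the openEO API):

```python
import numpy as np

# Nearest-neighbour upsampling by a factor of 2 in both spatial directions.
# The grid becomes finer, but the pixel information stays the same;
# smoother results would need e.g. interpolation.
image = np.array([[1, 2],
                  [3, 4]])
upsampled = image.repeat(2, axis=0).repeat(2, axis=1)
print(upsampled)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```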
The `reduce_dimension` process collapses a whole dimension of the datacube. It does so by using some sort of reducer, which is a function that calculates a single result from a set of values, such as `max()`. For example, we can reduce the time dimension (`t`) of a timeseries by calculating the mean value of all timesteps for each pixel. We are left with a cube that has no time dimension, because all values of that dimension are compressed into a single mean value. The same goes for the spatial dimensions: if we calculate the mean along the `x` and `y` dimensions, we are left without any spatial dimensions, but a mean value for each instance that previously was a raster is returned. In the image below, the dimensions that are reduced are crossed out in the result.
reduce(🥬, 🥒, 🍅, 🧅) => 🥗
Think of it as a waste press that does math instead of using brute force. Given a representation of our example datacube, let's see how it is affected.
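A NumPy sketch of reducing the time dimension with a mean reducer (conceptual stand-in, not the openEO API): the time axis disappears entirely, leaving one value per pixel.

```python
import numpy as np

# reduce_dimension sketch: collapse the time axis with a mean reducer.
# The time dimension is gone afterwards; one value per pixel remains.
cube = np.array([[[1.0, 2.0]],
                 [[3.0, 4.0]],
                 [[5.0, 6.0]]])   # (t=3, y=1, x=2)
reduced = cube.mean(axis=0)       # shape (1, 2): no time dimension left
print(reduced)  # [[3. 4.]]
```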
An aggregation of a datacube can be thought of as a grouped reduce. That means it consists of two steps:
- Grouping via a grouping variable, i.e. spatial geometries or temporal intervals
- Reducing these groups along the grouped dimension with a certain reducer function, e.g. calculating the mean pixel value per polygon or the maximum pixel values per month
While the layout of the grouped dimension is changed, other dimensions keep their resolution and geometry. In contrast to a pure `reduce`, the dimension along which the reducer function is applied still exists after the operation.
aggregate(👪 👩👦 👨👩👦👦, countFamilyMembers) => [3️⃣, 2️⃣, 4️⃣]
A temporal aggregation (e.g. `aggregate_temporal`) is similar to the downsampling process, as can be seen in the according image above. Intervals for grouping can either be set manually, or periods can be chosen: monthly, yearly, etc. All timesteps in an interval are then collapsed via a reducer function (`max`, etc.) and assigned to the given new labels.
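The two steps, grouping and reducing, can be sketched in plain Python (not the openEO API; the timestep values are made up for illustration). Here timesteps are grouped by month and each group is collapsed with a `max` reducer:

```python
# Temporal aggregation sketch: group by month, then reduce each group.
t_labels = ["2020-09-28", "2020-10-01", "2020-10-13", "2020-10-25"]
values = [5.0, 1.0, 7.0, 3.0]  # one pixel's value per timestep (made up)

monthly_max = {}
for label, value in zip(t_labels, values):
    month = label[:7]  # the new dimension label, e.g. "2020-10"
    monthly_max[month] = max(monthly_max.get(month, value), value)
print(monthly_max)  # {'2020-09': 5.0, '2020-10': 7.0}
```

The time dimension still exists afterwards, but it now carries one label per interval instead of one per observation.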
A spatial aggregation (e.g. `aggregate_spatial`) works in a similar manner. Polygons, lines and points can be selected for grouping. Their spatial dimension is then reduced by a given process and thus a vector cube is returned. The vector cube then has dimensions containing features, attributes and time. In the graphic below, the grouping is only shown for the first timestep.