The UK National River Flow Archive (NRFA) serves daily streamflow data, spatial rainfall averages and information regarding elevation, geology, land cover and FEH-related catchment descriptors.
An API is currently under development that should, in future, provide access to the following services: a metadata catalogue, catalogue filters based on a geographical bounding box, catalogue filters based on metadata entries, and gauged daily data for about 400 stations available in WaterML2 format, the OGC standard used to describe hydrological time series.
The information returned by the first three services is in JSON format, while the last one is an XML variant.
The rnrfa package aims to provide simpler and more efficient access to these data by providing wrapper functions to send HTTP requests and interpret XML/JSON responses.
The rnrfa package depends on the GDAL library; make sure it is installed on your system before attempting to install this package.
R package dependencies can be installed running the following code:
install.packages(c("cowplot", "httr", "xts", "ggmap", "ggplot2", "sp", "rgdal", "parallel", "tibble"))
This demo also makes use of external libraries. To install and load them, run the following commands:
packs <- c("devtools", "DT", "leaflet")
install.packages(packs)
lapply(packs, require, character.only = TRUE)
The stable version of the rnrfa package is available from CRAN:
install.packages("rnrfa")
Or you can install the development version from GitHub with devtools:
devtools::install_github("cvitolo/rnrfa")
Now, load the rnrfa package:
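library(rnrfa)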
The function station_ids() returns a vector of all NRFA station identifiers.
# Retrieve station identifiers:
allIDs <- station_ids()
head(allIDs)
The function catalogue() retrieves information for monitoring stations. The function, used with no inputs, requests the full list of gauging stations with associated metadata. The output is a tibble containing one record for each station and as many columns as the number of metadata entries available.
# Retrieve information for all the stations in the catalogue:
allStations <- catalogue()
head(allStations)
The columns are briefly described below (see also API documentation):
- id: The station identifier.
- name: The station name.
- catchment-area: The catchment area (in km²).
- grid-reference: The station grid reference. For JSON output the grid-reference is represented as an object with the following properties:
  - ngr (String): The grid reference in string form (e.g. “SS9360201602”).
  - easting (Number): The grid reference easting (in metres).
  - northing (Number): The grid reference northing (in metres).
- lat-long: The station latitude/longitude. For JSON output the lat-long is represented as an object with the following properties:
  - string (String): The textual representation of the lat/long (e.g. “50°48’15.0265”N 3°30’40.7121”W”).
  - latitude (Number): The latitude (expressed in decimal degrees).
  - longitude (Number): The longitude (expressed in decimal degrees).
- river: The name of the river.
- location: The name of the location on the river.
- station-level: The altitude of the station, in metres, above Ordnance Datum or, in Northern Ireland, Malin Head.
- easting: The grid reference easting.
- northing: The grid reference northing.
- station-information: Basic station information: id, name, catchment-area, grid-reference, lat-long, river, location, station-level, measuring-authority-id, measuring-authority-station-id, hydrometric-area, opened, closed, station-type, bankfull-flow, structurefull-flow, sensitivity, category.

The same function catalogue() can be used to filter stations based on a bounding box or any of the metadata entries.
# Define a bounding box:
bbox <- list(lon_min = -3.82, lon_max = -3.63, lat_min = 52.43, lat_max = 52.52)
# Filter stations based on bounding box
catalogue(bbox)
# Filter based on minimum recording years
catalogue(min_rec = 100)
# Filter stations along a particular river
catalogue(column_name = "river", column_value = "Wye")
# Filter based on bounding box & metadata strings
catalogue(bbox, column_name = "river", column_value = "Wye")
# Filter stations based on a threshold
catalogue(bbox, column_name = "catchment-area", column_value = ">1")
# Filter based on bounding box, metadata and minimum recording years
catalogue(bbox, column_name = "catchment-area",
          column_value = ">1",
          min_rec = 30)
# Filter stations based on identification number
catalogue(column_name = "id", column_value = c(3001, 3002, 3003))
The rnrfa package allows convenient conversion between UK grid references and more standard coordinate systems. The function osg_parse(), for example, converts a grid reference string to easting and northing in the British National Grid (BNG) coordinate system (EPSG code: 27700), as in the example below:
# Keep the stations within the bounding box defined above
someStations <- catalogue(bbox)
# Where is the first catchment located?
someStations$`grid-reference`$ngr[1]
# Convert OS Grid reference to BNG
osg_parse("SN853872")
The same function can also convert from BNG to latitude and longitude in the WGS84 coordinate system (EPSG code: 4326), as in the example below.
# Convert BNG to WGS84
osg_parse(grid_refs = "SN853872", coord_system = "WGS84")
osg_parse() also works with multiple references:
osg_parse(grid_refs = someStations$`grid-reference`$ngr)
The first column of the table someStations contains the station id numbers. These can be used to retrieve time series data; the WaterML2 responses are converted to time series objects (of class zoo).
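As an illustration (a minimal sketch, assuming the someStations tibble defined above), the identifiers can be extracted from the catalogue and then passed to the time series functions described below:
# Extract station identifiers from the filtered catalogue
ids <- someStations$id
head(ids)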
The National River Flow Archive serves two types of time series data: gauged daily flow and catchment mean rainfall.
These time series can be obtained using the functions gdf() and cmr(), respectively. Both functions accept three inputs:
- id: the station identification number(s) (single string or character vector).
- metadata: a logical variable (FALSE by default). If metadata is TRUE, the result for a single station is a list with two elements: data (the time series) and meta (metadata).
- cl: a cluster object, created by the parallel package. This is NULL by default, which sends sequential calls to the server.
Here is how to retrieve monthly catchment mean rainfall data for the Shin at Lairg catchment (id = 3001).
# Fetch only time series data from the waterml2 service
info <- cmr(id = "3001")
plot(info)
# Fetch time series data and metadata from the waterml2 service
info <- cmr(id = "3001", metadata = TRUE)
plot(info$data, main = paste("Monthly rainfall data for the",
                             info$meta$stationName, "catchment"),
     xlab = "", ylab = info$meta$units)
Here is how to retrieve gauged daily flow data for the Shin at Lairg catchment (id = 3001).
# Fetch only time series data
info <- gdf(id = "3001")
plot(info)
# Fetch time series data and metadata from the waterml2 service
info <- gdf(id = "3001", metadata = TRUE)
plot(info$data, main = paste0("Daily flow data for the ",
                              info$meta$station.name,
                              " catchment (",
                              info$meta$data.type.units, ")"))
Display the station catalogue as an interactive table:
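A minimal sketch, assuming the intent here is the DT package loaded at the beginning of this demo (which renders a data.frame or tibble as a sortable, searchable HTML table) and the someStations tibble defined above:
library(DT)
# Render the filtered catalogue as an interactive HTML table
datatable(someStations)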
Create interactive maps using leaflet:
library(leaflet)
leaflet(data = someStations) %>%
  addTiles() %>%
  addMarkers(~longitude, ~latitude, popup = ~as.character(paste(id, name)))
Interactive plots using dygraphs:
library(dygraphs)
dygraph(info$data) %>% dyRangeSelector()
Sequential vs concurrent requests: a simple benchmark test
library(parallel)
# Use detectCores() to find out how many cores are available on your machine
cl <- makeCluster(getOption("cl.cores", detectCores()))
# Filter all the stations within the above bounding box
someStations <- catalogue(bbox)
# Get flow data with a sequential approach
system.time(s1 <- gdf(someStations$id, cl = NULL))
# Get flow data with a concurrent approach (using `parLapply()`)
system.time(s2 <- gdf(id = someStations$id, cl = cl))
stopCluster(cl)
The measured flows are expected to increase with catchment area. Let’s illustrate this relationship with a simple linear regression:
# Calculate the mean flow for each catchment
someStations$meangdf <- unlist(lapply(s2, mean))
# Linear model
library(ggplot2)
ggplot(someStations, aes(x = as.numeric(`catchment-area`), y = meangdf)) +
  geom_point() +
  stat_smooth(method = "lm", col = "red") +
  xlab(expression("Catchment area [km"^2*"]")) +
  ylab(expression("Mean flow [m"^3*"/s]"))