The post Writing academic articles using R Sweave and LaTeX appeared first on The Devil is in the Data.

To illustrate the principles of using R Sweave and LaTeX, I recycled an essay about problems with body image that I wrote for a psychology course many years ago. You can find the completed paper and all necessary files on my GitHub repository.

Body image describes the way we feel about the shape of our body. The literature on this topic demonstrates that many people, especially young women, struggle with their body image. A negative body image has been strongly associated with eating disorders. Psychologists measure body image using a special scale, shown in the image below.

My paper measures the current and ideal body shape of the subject and the body shape of the other sex they find most attractive. The results confirm previous research which found that body dissatisfaction is significantly higher for women than for men. The research also found a mild positive correlation between age and ideal body shape for women, and between age and the female body shape men find most attractive. You can read the full paper on my personal website.

The R file for this essay uses the Sweave package to integrate R code with LaTeX. The first two code chunks create a table to summarise the respondents using the xtable package. This package creates LaTeX or HTML tables from data generated by R code.

The first lines of the code read and prepare the data, while the second set of lines creates a table in LaTeX code. The code chunk uses `results=tex` to ensure the output is interpreted as LaTeX. This approach is used in most of the other chunks. The image is created within the document, saved as a PDF file and integrated back into the document as an image with an appropriate label and caption.

```
<<echo=FALSE, results=tex>>=
body <- read.csv("body_image.csv")
# Respondent characteristics
body$Cohort <- cut(body$Age, c(0, 15, 30, 50, 99),
                   labels = c("<16", "16--30", "31--50", ">50"))
body$Date <- as.Date(body$Date)
body$Current_Ideal <- body$Current - body$Ideal
library(xtable)
respondents <- addmargins(table(body$Gender, body$Cohort))
xtable(respondents,
       caption = "Age profile of survey participants",
       label = "gender-age", digits = 0)
@
```

I created this file in RStudio, using the Sweave and knitr functionality. To knit the R Sweave file for this paper you will need to install the apa6 and ccicons packages in your LaTeX distribution. The apa6 package provides macros to format papers in accordance with the requirements of the American Psychological Association.


The post Pandigital Products: Euler Problem 32 appeared first on The Devil is in the Data.

The Numberphile video explains everything you ever wanted to know about pandigital numbers but were afraid to ask.

We shall say that an *n*-digit number is pandigital if it makes use of all the digits 1 to *n* exactly once; for example, the 5-digit number, 15234, is 1 through 5 pandigital.

The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing multiplicand, multiplier, and product is 1 through 9 pandigital.

Find the sum of all products whose multiplicand/multiplier/product identity can be written as a 1 through 9 pandigital.

HINT: Some products can be obtained in more than one way so be sure to only include it once in your sum.

The `pandigital.9` function tests whether a vector of digits forms a 1 through 9 pandigital number. The `pandigital.prod` vector stores the qualifying products.

The only way to solve this problem is by brute force, trying all multiplications, but we can limit the solution space to a manageable size. The identity needs to have nine digits in total. For example, when the multiplicand has two digits, the multiplier should have three digits so that the product has four digits, e.g. 39 × 186 = 7254. When the first number has only one digit, the second number needs to have four digits.

```r
pandigital.9 <- function(x)  # Test if digit vector is 9-pandigital
  (length(x) == 9 & sum(duplicated(x)) == 0 & sum(x == 0) == 0)

t <- proc.time()
pandigital.prod <- vector()
i <- 1
for (m in 2:100) {
  if (m < 10) n_start <- 1234 else n_start <- 123
  for (n in n_start:round(10000 / m)) {
    # List of digits
    digs <- as.numeric(unlist(strsplit(paste0(m, n, m * n), "")))
    # Is it pandigital?
    if (pandigital.9(digs)) {
      pandigital.prod[i] <- m * n
      i <- i + 1
      print(paste(m, "*", n, "=", m * n))
    }
  }
}
answer <- sum(unique(pandigital.prod))
print(answer)
```

Numbers can also be checked for pandigitality using mathematics instead of strings.
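A sketch of that mathematical approach: peel the digits off each factor and the product with modular arithmetic and compare the sorted digits with 1 to 9. The function name `pandigital.math` is mine, not from the original post.

```r
# Test pandigitality with modular arithmetic instead of strings.
# The name pandigital.math is illustrative only.
pandigital.math <- function(...) {
  digits <- numeric(0)
  for (n in c(...)) {
    while (n > 0) {                  # Peel off the digits of each number
      digits <- c(digits, n %% 10)
      n <- n %/% 10
    }
  }
  length(digits) == 9 && all(sort(digits) == 1:9)
}

pandigital.math(39, 186, 7254)  # TRUE: the identity is 1 through 9 pandigital
pandigital.math(12, 34, 5678)   # FALSE: the digit 9 is missing
```

This avoids the string conversion at the cost of an explicit loop over the digits.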

You can view the most recent version of this code on GitHub.


The post Analysing soil moisture data in NetCDF format with the ncdf4 library appeared first on The Devil is in the Data.

The Australian Bureau of Meteorology publishes hydrological data in both a simple map grid and in the NetCDF format. The map grid consists of a flat text file that requires a bit of data jujitsu before it can be used. The NetCDF format is much easier to deploy as it provides a three-dimensional matrix of spatial data over time.

We are looking at the possible relationship between sewer main blockages and deep soil moisture levels. You will need to manually download this dataset from the Bureau of Meteorology website. I have not been able to scrape the website automatically. For this analysis, I use the actual deep soil moisture level, aggregated monthly in NetCDF 4 format.

The ncdf4 library, developed by David W. Pierce, provides the necessary functionality to manage this data. The first step is to load the data, extract the relevant information and transform the data for visualisation and analysis. When the data is read, it essentially forms a complex list that contains the metadata and the measurements.

The `ncvar_get` function extracts the data from the list. The lon, lat and dates variables are the dimensions of the moisture data. The time data is stored as the number of days since 1 January 1900. The spatial coordinates are stored in decimal degrees with 0.05-degree intervals. The moisture data is a three-dimensional matrix with longitude, latitude and time as dimensions. Storing the data this way makes it very easy to use.

```r
library(ncdf4)

# Load data
bom <- nc_open("Hydroinformatics/SoilMoisture/sd_pct_Actual_month.nc")
print(bom)  # Inspect the data

# Extract data
lon <- ncvar_get(bom, "longitude")
lat <- ncvar_get(bom, "latitude")
dates <- as.Date("1900-01-01") + ncvar_get(bom, "time")
moisture <- ncvar_get(bom, "sd_pct")
dimnames(moisture) <- list(lon, lat, dates)
```

The first step is to check the overall data. The code below extracts a matrix from the data cube for 31 July 2017, converts it to a data frame and passes it to ggplot for visualisation. Although I use the Tidyverse, I still need reshape2 because the gather function does not like matrices.

```r
library(tidyverse)
library(RColorBrewer)
library(reshape2)

d <- "2017-07-31"
m <- moisture[, , which(dates == d)] %>%
  melt(varnames = c("lon", "lat")) %>%
  subset(!is.na(value))
ggplot(m, aes(x = lon, y = lat, fill = value)) +
  borders("world") +
  geom_tile() +
  scale_fill_gradientn(colors = brewer.pal(9, "Blues")) +
  labs(title = "Total moisture in deep soil layer (100-500 cm)",
       subtitle = format(as.Date(d), "%d %B %Y")) +
  xlim(range(lon)) +
  ylim(range(lat)) +
  coord_fixed()
```

With the ggmap package we can create a nice map of a local area.

```r
library(ggmap)
loc <- round(geocode("Bendigo") / 0.05) * 0.05
map_tile <- get_map(loc, zoom = 12, color = "bw") %>%
  ggmap()
map_tile +
  geom_tile(data = m, aes(x = lon, y = lat, fill = value), alpha = 0.8) +
  scale_fill_gradientn(colors = brewer.pal(9, "Blues")) +
  labs(title = "Total moisture in deep soil layer (100-500 cm)",
       subtitle = format(as.Date(d), "%d %B %Y"))
```

For my analysis, I am interested in the time series of moisture data for a specific point on the map. The previous code slices the data horizontally over time. To create a time series we can pierce through the data for a specific coordinate. The purpose of this time series is to investigate the relationship between sewer main blockages and deep soil data, which can be a topic for a future post.

```r
mt <- data.frame(date = dates,
                 dp = moisture[as.character(loc$lon), as.character(loc$lat), ])
ggplot(mt, aes(x = date, y = dp)) +
  geom_line() +
  labs(x = "Month", y = "Moisture",
       title = "Total moisture in deep soil layer (100-500 cm)",
       subtitle = paste(as.character(loc), collapse = ", "))
```

The latest version of this code is available on my GitHub repository.


The post Pacific Island Hopping using R and iGraph appeared first on The Devil is in the Data.

My first step was to create a list of flight connections between each of the island nations in the Pacific Ocean. I am not aware of a publicly available data set of international flights, so unfortunately I created a list manually (if you do know of such a data set, then please leave a comment).

My manual research resulted in a list of international flights from or to island airports. This list might not be complete, but it is a start. My Pinterest board with Pacific island airline route maps was the information source for this list.

The first code section reads the list of airline routes and uses the `ggmap` package to extract their coordinates from Google Maps. The data frame with airport coordinates is saved for future reference to avoid repeatedly pinging Google for the same information.

```r
# Init
library(tidyverse)
library(ggmap)
library(ggrepel)
library(geosphere)

# Read flight list and airport list
flights <- read.csv("Geography/PacificFlights.csv", stringsAsFactors = FALSE)
f <- "Geography/airports.csv"
if (file.exists(f)) {
  airports <- read.csv(f)
} else airports <- data.frame(airport = NA, lat = NA, lon = NA)

# Lookup coordinates for new airports
all_airports <- unique(c(flights$From, flights$To))
new_airports <- all_airports[!(all_airports %in% airports$airport)]
if (length(new_airports) != 0) {
  coords <- geocode(new_airports)
  new_airports <- data.frame(airport = new_airports, coords)
  airports <- rbind(airports, new_airports)
  airports <- subset(airports, !is.na(airport))
  write.csv(airports, "Geography/airports.csv", row.names = FALSE)
}

# Add coordinates to flight list
flights <- merge(flights, airports, by.x = "From", by.y = "airport")
flights <- merge(flights, airports, by.x = "To", by.y = "airport")
```

To create a map, I modified the code to create flight maps I published in an earlier post. This code had to be changed to centre the map on the Pacific. Mapping the Pacific Ocean is problematic because the −180 and +180 degree meridians meet at the antimeridian, around the date line. Longitudes west of the antimeridian are positive, while longitudes east of it are negative.

The `world2` data set in the borders function of the `ggplot2` package is centred on the Pacific Ocean. To enable plotting on this map, all negative longitudes are made positive by adding 360 degrees to them.

```r
# Pacific centric
flights$lon.x[flights$lon.x < 0] <- flights$lon.x[flights$lon.x < 0] + 360
flights$lon.y[flights$lon.y < 0] <- flights$lon.y[flights$lon.y < 0] + 360
airports$lon[airports$lon < 0] <- airports$lon[airports$lon < 0] + 360

# Plot flight routes
worldmap <- borders("world2", colour = "#efede1", fill = "#efede1")
ggplot() + worldmap +
  geom_point(data = airports, aes(x = lon, y = lat), col = "#970027") +
  geom_text_repel(data = airports, aes(x = lon, y = lat, label = airport),
                  col = "black", size = 2, segment.color = NA) +
  geom_curve(data = flights,
             aes(x = lon.x, y = lat.x, xend = lon.y, yend = lat.y,
                 col = Airline),
             size = .4, curvature = .2) +
  theme(panel.background = element_rect(fill = "white"),
        axis.line = element_blank(),
        axis.text.x = element_blank(),
        axis.text.y = element_blank(),
        axis.ticks = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank()) +
  xlim(100, 300) +
  ylim(-40, 40)
```

This visualisation is attractive and full of context, but it is not the best visualisation to solve the travel problem. This map can also be expressed as a graph with nodes (airports) and edges (routes). Once the map is represented mathematically, we can generate travel routes and begin our Pacific island hopping.

The igraph package converts the flight list to a graph that can be analysed and plotted. The `shortest_paths` function can then be used to plan routes. If I wanted to travel from Auckland to Saipan in the Northern Mariana Islands, I would have to go through Port Vila, Honiara, Port Moresby, Chuuk and Guam before reaching Saipan. I am pretty sure there are quicker ways to get there, but this would be an exciting journey through the Pacific.

```r
library(igraph)
g <- graph_from_edgelist(as.matrix(flights[, 1:2]), directed = FALSE)
par(mar = rep(0, 4))
plot(g, layout = layout.fruchterman.reingold, vertex.size = 0)
V(g)
shortest_paths(g, "Auckland", "Saipan")
```

View the latest version of this code on GitHub.


The post Digit fifth powers: Euler Problem 30 appeared first on The Devil is in the Data.

Numberphile has a nice video about a trick to quickly calculate the fifth root of a number that makes you look like a mathematical wizard.

Surprisingly there are only three numbers that can be written as the sum of fourth powers of their digits:

1634 = 1^4 + 6^4 + 3^4 + 4^4
8208 = 8^4 + 2^4 + 0^4 + 8^4
9474 = 9^4 + 4^4 + 7^4 + 4^4

As 1 = 1^4 is not a sum, it is not included.

The sum of these numbers is 1634 + 8208 + 9474 = 19316. Find the sum of all the numbers that can be written as the sum of fifth powers of their digits.

The problem asks for a brute-force solution, but we have a halting problem: how far do we need to go before we can be certain there are no more sums of fifth-power digits? The highest digit is 9 and 9^5 = 59049, which has five digits. Even a seven-digit number can sum to at most 7 × 9^5 = 413343, which has only six digits, so 6 × 9^5 = 354294 is a good endpoint for the loop. The loop itself cycles through the digits of each number and tests whether the sum of the fifth powers equals the number.

```r
largest <- 6 * 9^5
answer <- 0
for (n in 2:largest) {
  power.sum <- 0
  i <- n
  while (i > 0) {
    d <- i %% 10
    i <- floor(i / 10)
    power.sum <- power.sum + d^5
  }
  if (power.sum == n) {
    print(n)
    answer <- answer + n
  }
}
print(answer)
```

View the most recent version of this code on GitHub.


The post Visualising Water Consumption using a Geographic Bubble Chart appeared first on The Devil is in the Data.

In this post, I share this little ditty to explain how to plot a bubble chart over a map using the ggmap package.

The sample data contains a list of just over 100 readings from water meters in the city of Việt Trì in Vietnam, plus their geospatial location. This data uses the World Geodetic System of 1984 (WGS84), which is compatible with Google Maps and similar systems.

```r
# Load the data
water <- read.csv("PhuTho/MeterReads.csv")
water$Consumption <- water$read_new - water$read_old

# Summarise the data
head(water)
summary(water$Consumption)
```

The consumption at each connection is between 0 and 529 cubic metres, with a mean consumption of 23.45 cubic metres.

With the ggmap extension of the ggplot package, we can visualise any spatial data set on a map. The only condition is that the spatial coordinates are in the WGS84 datum. The ggmap package adds a geographical layer to ggplot by adding a Google Maps or Open Street Map canvas.

The first step is to download the map canvas. To do this, you need to know the centre coordinates and the zoom factor. Determining the perfect zoom factor requires some trial and error. The ggmap package provides various map types, which are described in detail in the documentation.

```r
# Load map library
library(ggmap)

# Find the middle of the points
centre <- c(mean(range(water$lon)), mean(range(water$lat)))

# Download the satellite image
viettri <- get_map(centre, zoom = 17, maptype = "hybrid")
g <- ggmap(viettri)
```

The ggmap package follows the same conventions as ggplot. We first call the map layer and then add any required geom. The point geom creates a nice bubble chart when used in combination with the `scale_size_area` option. This option scales the points to a maximum size so that they are easily visible. The transparency (alpha) minimises problems with overplotting. This last code snippet plots the map with water consumption.

```r
# Add the points
g + geom_point(data = water,
               aes(x = lon, y = lat, size = Consumption),
               shape = 21, colour = "dodgerblue4",
               fill = "dodgerblue", alpha = .5) +
  scale_size_area(max_size = 20) +  # Size of the biggest point
  ggtitle("Việt Trì sự tiêu thụ nước")
```

You can find the code and data for this article on my GitHub repository. With thanks to Ms Quy and Mr Tuyen of Phu Tho water for their permission to use this data.

This map visualises water consumption in the targeted area of Việt Trì. The larger the bubble, the higher the consumption. It is no surprise that two commercial customers used the most water. ggplot automatically adds a legend for the consumption variable.


The post Data Science for Water Utilities Using R appeared first on The Devil is in the Data.

My work in this area is gaining popularity. Two weeks ago I was the keynote speaker at an asset data conference in New Zealand. My paper about data science strategy for water utilities is the most downloaded paper this year. This week I am in Vietnam, assisting the local Phú Thọ water company with their data science problems.

In all my talks and publications I emphasise the importance of collaboration between utilities and of sharing code, because we all share the same problems. I hope to develop a global data science coalition for water services to achieve this goal.

My book about making water utilities more customer-centric will soon be published, so time to start another project. My new book will be about *Data Science for Water Utilities Using R*. This book is currently not more than a collection of existing articles, code snippets and production work from my job. The cover is finished because it motivates me to keep writing.

This article describes my proposed chapter structure with some example code snippets. The most recent version of this code can be found on my GitHub repository. Feel free to leave a comment at the bottom of this article if you would like to see additional problems discussed, or if you want to participate by sharing code.

The first chapter will provide a strategic overview of data science and how water utilities can use this discipline to create value. This chapter is based on earlier articles and recent presentations on the topic.

This chapter will make a case for using R by providing just enough information for readers to be able to follow the code in the book. A recurring theme at a data conference in Auckland I spoke at was the problems posed by the high reliance on spreadsheets. This chapter will explain why code is superior and how to use R to achieve this advantage.

This first practical chapter will discuss how to manage data from reservoirs. The core problem is to find the relationship between depth and volume based on bathymetric survey data. I started toying with bathymetric data from Prettyboy Reservoir in the state of Maryland. The code below downloads and visualises this data.

```r
# RESERVOIRS
library(tidyverse)
library(RColorBrewer)
library(gridExtra)

# Read data
if (!file.exists("Hydroinformatics/prettyboy.csv")) {
  url <- "http://www.mgs.md.gov/ReservoirDataPoints/PrettyBoy1998.dat"
  prettyboy <- read.csv(url, skip = 2, header = FALSE)
  names(prettyboy) <- read.csv(url, nrows = 1, header = FALSE,
                               stringsAsFactors = FALSE)
  write_csv(prettyboy, "Hydroinformatics/prettyboy.csv")
} else prettyboy <- read_csv("Hydroinformatics/prettyboy.csv")
head(prettyboy)

# Remove extremes, duplicates and anomaly
ext <- c(which(prettyboy$Easting == min(prettyboy$Easting)),
         which(prettyboy$Easting == max(prettyboy$Easting)),
         which(duplicated(prettyboy)))
prettyboy <- prettyboy[-ext, ]

# Visualise reservoir
bathymetry_colours <- c(rev(brewer.pal(3, "Greens"))[-2:-3],
                        brewer.pal(9, "Blues")[-1:-3])
ggplot(prettyboy, aes(x = Easting, y = Northing, colour = Depth)) +
  geom_point(size = .1) +
  coord_equal() +
  scale_colour_gradientn(colors = bathymetry_colours)
```

In the plot, you can see the lines where the survey boat took soundings. I am working on converting this survey data to a non-convex hull to calculate its volume and to determine the relationship between depth and volume.
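While the non-convex hull is a work in progress, a simpler approximation can rasterise the soundings: bin the survey points into a regular grid, average the depth in each cell, and sum the water column over all cells for a range of drawdowns. The sketch below uses simulated bowl-shaped data, not the real survey; only the column names follow the code above.

```r
# Approximate a depth-volume curve by binning soundings into a 10 m grid.
# The soundings data frame is simulated; real survey data would replace it.
set.seed(42)
soundings <- data.frame(Easting = runif(5000, 0, 500),
                        Northing = runif(5000, 0, 500))
soundings$Depth <- pmax(0, 20 - 3e-04 * ((soundings$Easting - 250)^2 +
                                         (soundings$Northing - 250)^2))

cell <- 10  # Grid cell size in metres
grid_depth <- aggregate(Depth ~ cut(Easting, seq(0, 500, cell)) +
                          cut(Northing, seq(0, 500, cell)),
                        data = soundings, FUN = mean)

# Stored volume for a series of drawdowns below full supply level
drawdown <- 0:20
volume <- sapply(drawdown, function(h)
  sum(pmax(grid_depth$Depth - h, 0)) * cell^2)
plot(drawdown, volume / 1e6, type = "l",
     xlab = "Drawdown (m)", ylab = "Volume (million cubic metres)")
```

The resulting curve is the depth-volume relationship; a proper hull-based method would refine the surface area estimate at each level.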

Other areas to be covered in this chapter could be hydrology and meteorology, but alas I am not qualified in these subjects. I hope to find somebody who can help me with this part.

The quality of water in tanks and networks is tested using samples. One of the issues in analysing water quality data is the low number of data points due to the cost of laboratory testing. There has been some discussion about how to correctly calculate percentiles and other statistical issues.
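To illustrate the percentile issue: base R alone implements nine different quantile algorithms, and on the small samples typical of laboratory testing they disagree noticeably (type 5 corresponds to the Hazen plotting position often used in water quality reporting). The sample values below are invented for illustration.

```r
# Nine quantile definitions give different 95th percentiles on a small
# sample. The E. coli counts are hypothetical illustration data.
ecoli <- c(1, 1, 2, 2, 3, 4, 6, 8, 15, 120)
p95 <- sapply(1:9, function(type) quantile(ecoli, 0.95, type = type))
round(p95, 2)
```

The spread between the methods shows why a water quality report should state which percentile definition it uses.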

This chapter will also describe how to create a water system index to communicate the performance of a water system to non-experts. The last topic in this chapter discusses analysing taste testing data.

We have developed a model to produce water balances based on SCADA data. I am currently generalising this idea by using the *igraph* package to define water network geometry. Next year I will start experimenting with a predictive model for water consumption that uses data from the Australian Census and historical data to predict future use.

Data from SCADA systems are time series. This chapter will discuss how to model this data, find spikes in the readings and conduct predictive analyses.
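As a taste of this chapter, a minimal spike detector can compare each reading with a rolling median and flag points that deviate by several robust standard deviations. The series below is simulated; real SCADA extracts would take its place, and the threshold of five MADs is an assumption to tune.

```r
# Flag spikes as points far from an 11-point rolling median. The series
# is simulated: a smooth signal with noise and two injected spikes.
set.seed(1)
reading <- sin(seq(0, 6 * pi, length.out = 200)) + rnorm(200, sd = 0.1)
reading[c(50, 120)] <- reading[c(50, 120)] + 3  # Inject two spikes

trend <- runmed(reading, k = 11)  # Robust rolling median
residual <- reading - trend
spikes <- which(abs(residual) > 5 * mad(residual))
spikes  # Indices of the injected spikes
```

The rolling median ignores isolated outliers, so the residuals isolate the spikes while the seasonal pattern drops out.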

This chapter is based on my dissertation on customer perception. Most water utilities do not extract the full value from their customer surveys. In this chapter, I will show how to analyse latent variables in survey data. The code below loads the cleaned data set with the results of a customer survey I undertook in Australia and the USA. The first ten variables are the Personal Involvement Index. The code does a quick exploratory analysis using a boxplot and visualises a factor analysis that uncovers two latent variables.

```r
# CUSTOMERS
library(psych)

# Read data
customers <- read_csv("Hydroinformatics/customers.csv")

# Exploratory analysis
p1 <- customers[, 1:10] %>%
  gather %>%
  ggplot(aes(x = key, y = value)) +
  geom_boxplot() +
  xlab("Item") + ylab("Response") +
  ggtitle("Personal Involvement Index")

# Factor analysis
fap <- fa.parallel(customers[, 1:10])
grid.arrange(p1, ncol = 2)
customers[, 1:10] %>%
  fa(nfactors = fap$nfact, rotate = "promax") %>%
  fa.diagram(main = "Factor Analysis")
```

Customer complaints are a gift to the business. Unfortunately, most businesses view complaints punitively. This chapter will explain how to analyse and respond to complaints to improve the level of service to customers.

One of the topics in this chapter is how to use Erlang-C modelling to predict staffing levels in contact centres.
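As a sketch of what that could look like, the Erlang-C formula gives the probability that a caller has to wait, derived here from the Erlang-B blocking probability. The function name `erlang_c` and the traffic figures are mine, not from an established package.

```r
# Erlang-C probability that a caller must wait, given the number of
# agents and the offered traffic in Erlangs. Illustrative sketch only;
# factorial() overflows for very large teams, where a recursive form helps.
erlang_c <- function(agents, traffic) {
  erlang_b <- (traffic^agents / factorial(agents)) /
    sum(traffic^(0:agents) / factorial(0:agents))
  erlang_b / (1 - (traffic / agents) * (1 - erlang_b))
}

# 100 calls per hour with a 3-minute average handle time = 5 Erlangs
traffic <- 100 * 3 / 60
waits <- sapply(6:10, erlang_c, traffic = traffic)
round(waits, 3)  # Waiting probability for 6 to 10 agents
```

Staffing levels can then be chosen as the smallest team whose waiting probability meets the service target.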

Last but not least, economics is the engine room of any organisation. In the early stages of my career, I specialised in cost estimating, including probabilistic methods. This chapter will include an introduction to Monte Carlo simulation to improve cost estimation reliability.
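To give a flavour of that topic (all figures invented): sample each line item from a triangular distribution defined by a low, most likely and high estimate, add the items up many times, and read the P50 and P90 from the simulated distribution. The `rtriangle` helper below is a hand-rolled inverse-CDF sampler, not a package function.

```r
# Monte Carlo cost estimate: three line items with triangular
# distributions; the P90 is a common basis for contingency.
rtriangle <- function(n, low, mode, high) {
  u <- runif(n)
  f <- (mode - low) / (high - low)
  ifelse(u < f,
         low + sqrt(u * (high - low) * (mode - low)),
         high - sqrt((1 - u) * (high - low) * (high - mode)))
}

set.seed(1234)
n <- 10000
cost <- rtriangle(n, 100, 120, 180) +  # Civil works
        rtriangle(n,  30,  40,  90) +  # Mechanical
        rtriangle(n,  20,  25,  40)    # Electrical
quantile(cost, c(0.5, 0.9))  # P50 and P90 estimates
```

The gap between the P50 and the P90 quantifies the contingency allowance that a deterministic estimate hides.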

This book is still in its early stages. The mind map below shows the work in progress on the proposed chapters and topics.