Euler Problem 29 is a problem about counting distinct powers that is quite easy to solve using brute force. The MathBlog site by Kristian Edlund has a nice solution using only pen and paper.

Raising a number to a power can have interesting results. The video below explains why a pandigital formula approximates Euler's number to billions of decimal places:

## Euler Problem 29 Definition

Consider all integer combinations of a^b for 2 ≤ a ≤ 5 and 2 ≤ b ≤ 5:

2^2=4, 2^3=8, 2^4=16, 2^5=32
3^2=9, 3^3=27, 3^4=81, 3^5=243
4^2=16, 4^3=64, 4^4=256, 4^5=1024
5^2=25, 5^3=125, 5^4=625, 5^5=3125

If they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms:

4, 8, 9, 16, 25, 27, 32, 64, 81, 125, 243, 256, 625, 1024, 3125

How many distinct terms are in the sequence generated by a^b for 2 ≤ a ≤ 100 and 2 ≤ b ≤ 100?
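The small 2 ≤ a, b ≤ 5 example can be checked with a few lines of R in the same brute-force style used below:

```r
# Sanity check: count distinct powers a^b for 2 <= a, b <= 5
terms <- vector()
for (a in 2:5) {
  for (b in 2:5) {
    terms <- c(terms, a^b)
  }
}
print(length(unique(terms)))  # 15 distinct terms, as stated in the problem
```

Of the 16 combinations, only 2^4 = 4^2 = 16 coincide, which is why 15 distinct terms remain.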

## Brute Force Solution

This code simply calculates all powers a^b for a and b ranging from 2 to 100 and determines the number of unique values. Since we are only interested in their uniqueness and not the precise value, there is no need to use Multiple Precision Arithmetic.

```r
# Initialisation
target <- 100
terms <- vector()
i <- 1
# Loop through values of a and b and store powers in vector
for (a in 2:target) {
  for (b in 2:target) {
    terms[i] <- a^b
    i <- i + 1
  }
}
# Determine the number of distinct powers
answer <- length(unique(terms))
print(answer)
```

View the latest version of this code on GitHub.


Ack! No vectorization and growing an object in a loop! You know there are 99 × 99 elements, so initialize the vector to that length! And then `^` is vectorized, so you only need one for loop: `a^(2:target)` will calculate 99 terms at once. Or skip the loops entirely and use this:

`length(unique(as.vector(outer(2:100, 2:100, FUN = "^"))))`

Hi Gregor,

Thanks for the lesson, I did not know the outer command.

Your one-line solution is much faster 🙂

Peter

You’re quite welcome 🙂

Really though, avoid growing objects in loops. Initializing the object to the correct length is *much* faster than extending its length on every iteration. With a vector, it's pretty quick either way, but with a data frame the difference is huge.

Just form the good habit of always pre-allocating and you’ll avoid a common, needless bottleneck.
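The two approaches can be sketched as follows (the function names `grow` and `prealloc` are hypothetical, chosen only for this illustration; timings are left to `system.time()`):

```r
# Growing a vector inside the loop: R may reallocate on every extension
grow <- function(n) {
  x <- vector()
  for (i in 1:n) x[i] <- i^2
  x
}

# Pre-allocating to the known length: one allocation up front
prealloc <- function(n) {
  x <- numeric(n)
  for (i in 1:n) x[i] <- i^2
  x
}

stopifnot(identical(grow(1000), prealloc(1000)))  # same result either way
# Compare timings with system.time(grow(1e5)) vs system.time(prealloc(1e5))
```

Both functions return the same vector; only the allocation pattern differs, which is where the speed difference comes from.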
