save.ffdf and load.ffdf: Save and load your big data – quickly and neatly!

I’m very indebted to the ff and ffbase packages in R.  Without them, I would probably have to use some less savoury stats program for the bigger data analysis projects I do at work.

Since I started using ff and ffbase, I have resorted to saving and loading my ff dataframes using ffsave and ffload.  The syntax isn’t so bad, but the process they put your computer through is a bit cumbersome: saving and loading take a while, and ffsave creates (by default) a bunch of randomly named ff files in a temporary directory.

For that reason, I was happy to come across a link to a pdf presentation (sorry, I’ve lost it now) summarizing some cool features of ffbase.  I learned that instead of using ffsave and ffload, you can use save.ffdf and load.ffdf, which have very simple syntax:

save.ffdf(ffdfname, dir="/PATH/TO/STORE/FF/FILES")

Use that, and it creates a directory where it stores ff files bearing the same names as the columns of your ff dataframe!  It also stores an .RData and an .Rprofile file.  Then there is:

load.ffdf(dir="/PATH/TO/STORE/FF/FILES")

As simple as that, you load your files, and you’re done!  I think what I like about these functions is that they allow you to easily choose where the ff files are stored, removing the worry about important files being in your temporary directory.
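Putting the two together, a minimal round trip might look like this (a sketch; the directory path and the toy ffdf are my own placeholders):

```r
library(ff)
library(ffbase)

# A toy ffdf; any existing ffdf works the same way
demo.ffdf = as.ffdf(data.frame(x = 1:10, y = factor(letters[1:10])))

# Saves one ff file per column (named after the column), plus the
# .RData and .Rprofile files mentioned above
save.ffdf(demo.ffdf, dir = "/PATH/TO/STORE/FF/FILES")

# In a later session, this restores demo.ffdf under its original name
load.ffdf(dir = "/PATH/TO/STORE/FF/FILES")
```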

Store your big data!!

Which Torontonians Want a Casino? Survey Analysis Part 2

In my last post I said that I would try to investigate the question of who actually does want a casino, and whether place of residence is a factor in where they want the casino to be built.  So, here goes something:

The first line of attack in this blog post is to distinguish between people based on their responses to the third question on the survey, the one asking people to rate the importance of a long list of issues.  When I looked at this list originally, I knew that I would want to reduce the dimensionality using PCA.

library(psych)

# Extract 3 varimax-rotated components (with scores) from the 16
# issue-importance items in columns 8 through 23
issues.pca = principal(casino[,8:23], 3, rotate="varimax", scores=TRUE)

The PCA resulted in the 3 components listed in the table below.  The first component had variables loading on to it that seemed to relate to the casino being a big attraction with lots of features, so I named it “Go big or Go Home”.  On the second component there seemed to be variables loading on to it that related to technical details, while the third component seemed to have variables loading on to it that dealt with social or environmental issues.

        Go Big or  Concerned with      Concerned with Social/
        Go Home    Technical Details   Environmental Issues or not   Issue/Concern
Q3_A     0.181      0.751                                            Design of the facility
Q3_B     0.366      0.738                                            Employment Opportunities
Q3_C     0.44       0.659                                            Entertainment and cultural activities
Q3_D     0.695      0.361                                            Expanded convention facilities
Q3_E     0.701      0.346                                            Integration with surrounding areas
Q3_F     0.808      0.266                                            New hotel accommodations
Q3_G    -0.117                          0.885                        Problem gambling & health concerns
Q3_H                                    0.904                        Public safety and social concerns
Q3_I     0.254                          0.716                        Public space
Q3_J     0.864      0.218                                            Restaurants
Q3_K     0.877      0.157                                            Retail
Q3_L     0.423      0.676              -0.1                          Revenue for the city
Q3_M     0.218      0.703               0.227                        Support for local businesses
Q3_N     0.647      0.487              -0.221                        Tourist attraction
Q3_O     0.118                          0.731                        Traffic concerns
Q3_P     0.497      0.536               0.124                        Training and career development
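The loadings in the table above can also be pulled straight from the fitted object; a small sketch, assuming the issues.pca object from the code above:

```r
# Show the rotated loadings, suppressing small ones so the
# three-component structure is easier to read
print(issues.pca$loadings, cutoff = 0.3)

# Variance accounted for by each rotated component
issues.pca$Vaccounted
```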

Once I was satisfied that I had a decent understanding of what the PCA was telling me, I loaded the component scores into the original dataframe.

casino[,110:112] = issues.pca$scores
names(casino)[110:112] = c("GoBigorGoHome","TechnicalDetails","Soc.Env.Issues")

In order to investigate the question of who wants a casino and where, I decided to use question 6 as a dependent variable (the one asking where they would want it built, if one were to be built) and the PCA components as independent variables.  This is a good question to use, because the answer options, if you remember, are “Toronto”, “Adjacent Municipality” and “Neither”.  My approach was to model each response individually using logistic regression.

casino$Q6[casino$Q6 == ""] = NA
casino$Q6 = factor(casino$Q6, levels=c("Adjacent Municipality","City of Toronto","Neither"))

# One binary logistic regression per response option
adj.mun = glm(I(Q6 == "Adjacent Municipality") ~ GoBigorGoHome + TechnicalDetails + Soc.Env.Issues, data=casino, family=binomial(logit))
toronto = glm(I(Q6 == "City of Toronto") ~ GoBigorGoHome + TechnicalDetails + Soc.Env.Issues, data=casino, family=binomial(logit))
neither = glm(I(Q6 == "Neither") ~ GoBigorGoHome + TechnicalDetails + Soc.Env.Issues, data=casino, family=binomial(logit))

Following are the summaries of each GLM:
Toronto:

Adjacent municipality:

Neither location:

And here is a quick summary of the above GLM information:
Summary of Casino GLMs
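One compact way to line the three models up, as a rough sketch using the model objects fit above:

```r
# Coefficients from the three binary logistic regressions, side by side.
# Positive values mean higher component scores go with choosing that option.
round(cbind(Toronto  = coef(toronto),
            Adjacent = coef(adj.mun),
            Neither  = coef(neither)), 3)
```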

Judging from these results, it looks like those who want a casino in Toronto don’t focus on the big social/environmental issues surrounding the casino, but do focus on the flashy and non-flashy details and benefits alike.  Those who want a casino outside of Toronto do care about the social/environmental issues, don’t care as much about the flashy details, but do have a focus on some of the non-flashy details.  Finally, those not wanting a casino in either location care about the social/environmental issues, but don’t care about any of the details.

Here’s where the issue of location comes into play.  When I look at the summary for the GLM that predicts who wants a casino in an adjacent municipality, I get the feeling that it’s picking up people living in the downtown core who just don’t think the area can handle a casino.  In other words, I think there might be a “not in my backyard!” effect.

The first inkling that this might be the case comes from an article from the Martin Prosperity Institute (MPI), which analyzed the same data set and managed to produce a very nice looking heat map of the responses to the first question on the survey, asking people how they feel about having a new casino in Toronto.  From this map, it does look like people in downtown Toronto are feeling pretty negative about a new casino, whereas those in the far east and west of Toronto are feeling better about it.

My next evidence comes from the cities uncovered by geocoding the responses in the data set.  I decided to create a very simple indicator variable, distinguishing those for whom the “City” is Toronto from those for whom the city is anything else.  I like this better than the MPI analysis, because it looks at people’s attitudes towards a casino both inside and outside of Toronto (rather than towards the concept of a new casino in Toronto).  If there really is a “not in my backyard!” effect, I would expect to see evidence that those in Toronto are more disposed towards a casino in an adjacent municipality, and that those from outside of Toronto are more disposed towards a casino inside Toronto!  Here we go:
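A sketch of how that indicator and comparison can be built; the City column name comes from my geocoding step, so treat it as an assumption:

```r
# Flag respondents whose geocoded city is Toronto proper
casino$InToronto = ifelse(casino$City == "Toronto", "Toronto", "Outside Toronto")

# Row-wise proportions: preferred casino location (Q6) by place of residence
round(prop.table(table(casino$InToronto, casino$Q6), margin = 1), 2)
```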

Where located by city of residence

As you can see here, those from the outside of Toronto are more likely to suggest building a casino in Toronto compared with those from the inside, and less likely to suggest building a casino in an adjacent municipality (with the reverse being true about those from the inside of Toronto).

That being said, when you do the comparison within city of residence (instead of across it like I just did), those from inside Toronto seem equally likely to suggest that the casino be built in or outside of the city, whereas those from outside are much more likely to suggest building the casino inside Toronto than outside.  So, depending on how you view this graph, you might only say there’s evidence for a “not in my backyard!” effect for those living outside of Toronto.

As a final note, I’ll remind you that although these analyses point to which Torontonians do want a new casino, the fact remains that about 71% of respondents in this survey are unsupportive of a casino in Toronto, and 53% don’t want a casino built in either Toronto or an adjacent municipality.  I really have to wonder if they’re still going to go ahead with it!

Do Torontonians Want a New Casino? Survey Analysis Part 1

Toronto City Council is in the midst of a very lengthy process of considering whether or not to allow the OLG to build a new casino in Toronto, and if so, where.  The process started in November of 2012, and set out to answer this question through many and varied consultations with the public and key stakeholders in the city.

One of the methods of public consultation that they used was a “Casino Feedback Form”, a survey that was distributed online and in person.  By the time the deadline to collect responses had passed (January 25, 11:59pm), they had collected a whopping 17,780 responses.  The agency seemingly responsible for the survey is called DPRA, and from what I can tell they did a pretty decent job of creating and distributing it.

In a surprisingly modern and democratic move, Toronto City Council made the response data for the survey available on the Toronto Open Data website, which I couldn’t help but download and analyze for myself (with R, of course!).

For a relatively small survey, it’s very rich in information.  I love having hobby data sets to work with from time to time, and so I’m going to dedicate a few posts to the analysis of this response data file.  This post will not show too much that’s different from the report that DPRA has already released, as it contains largely univariate analyses.  In later posts however, I will get around to asking and answering those questions that are of a more multivariate nature!  To preserve the flow of the post, I will post the R code at the end, instead of interspersing it throughout like I normally do.  Unless otherwise specified, all numerical axes represent the % of people who selected a particular response on the survey.

Without further ado, I will start with some key findings:

Key Findings

  1. With 17,780 responses, Toronto City Council obtained for themselves a hefty data set with pretty decent geographical coverage of the core areas of Toronto (Toronto, North York, Scarborough, Etobicoke, East York, Mississauga).  This is much better than Ipsos Reid’s Casino Survey response data set of 906 respondents.
  2. Only 25.7% of respondents were somewhat or strongly in favour of having a new casino in Toronto.  I’d say that’s overwhelmingly negative!
  3. Ratings of the suitability of a casino in three different locations by type of casino indicate that people are more favourable towards an Integrated Entertainment Complex (basically a casino with extra amenities) vs. a standalone casino.
  4. Of the three different locations, people were most favourable towards an Integrated Entertainment Complex at the Exhibition Place.  However, bear in mind that only 27.4% of respondents thought it was suitable at all.  This is a ‘best of the worst’ result!
  5. When asked to rate the importance of a list of issues surrounding the building of a new casino in Toronto, respondents rated as most important the following issues: safety, health, addiction, public space, traffic, and integration with surrounding areas.

Geographic Distribution of Responses

In a relatively short time, City Council managed to collect many responses to their survey.  I wanted to look at the geographic distribution of all of these responses.  Luckily, the survey included a question that asked for the first 3 characters of the respondents’ postal code (the FSA).  If you have a file containing geocoded postal codes, you can then plot the respondents on a map.  I managed to find such a file on a website called geocoder.ca, with latitude and longitude coordinates for over 90,000 postal codes.  Once I got the file into R, I made sure that all FSA codes in the survey data were capitalized, created an FSA column in the geocoded file, and then merged the geocoded dataset into the survey dataset.  This isn’t a completely valid approach, but when looking at a broad area, I don’t think the errors in plotting points on a map are going to look that serious.
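Roughly, the merge went like this.  This is only a sketch: the geocoder.ca filename and the column names (PostalCode, Latitude, Longitude, Q2_FSA) are my own assumptions, not from the original files.

```r
# Postal code file from geocoder.ca: one row per postal code with coordinates
geo = read.csv("geocoder_postal_codes.csv")   # hypothetical filename
geo$FSA = toupper(substr(geo$PostalCode, 1, 3))

# Keep one coordinate per FSA -- crude, but fine for a broad-area map
geo = geo[!duplicated(geo$FSA), c("FSA", "Latitude", "Longitude")]

# Capitalize the survey FSAs, then merge the coordinates in
casino$FSA = toupper(casino$Q2_FSA)   # Q2_FSA is a hypothetical column name
casino = merge(casino, geo, by = "FSA", all.x = TRUE)
```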

For a survey about Toronto, the geographic distribution was actually pretty wide.  Have a look at the complete distribution:

Total Geo Distribution of Responses

Obviously there seem to be a whole lot of responses in Southern Ontario, but we even see a smattering of responses in neighbouring provinces as well.  However, let’s look at a way of zooming in on the large cluster of Southern Ontario cities.  From the postal codes, I was able to get the city in which each response was made.  From that I pulled out what looked like a good cluster of top Southern Ontario cities:

          City	 # Responses
       Toronto	8389
    North York	1553
   Scarborough	1145
     Etobicoke	936
     East York	462
   Mississauga	201
       Markham	149
      Brampton	111
 Richmond Hill	79
     Thornhill	62
          York	59
         Maple	58
        Milton	30
      Oakville	30
    Woodbridge	30
    Burlington	28
        Oshawa	25
     Pickering	22
        Whitby	19
      Hamilton	17
        Bolton	14
        Guelph	13
      Nobleton	12
        Aurora	11
          Ajax	10
       Caledon	10
   Stouffville	10
        Barrie	9

Lots of people in Toronto, obviously, a fair number in North York, Scarborough, and Etobicoke, and then frequency drops off sharply from there.  However, these city labels come from the geocoding, and who knows whether some people it places in Toronto are actually from North York (the tree, versus one of its apples).  So, I filtered the latitude and longitude coordinates based on this top list to get the following zoomed-in map:
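The filtering step itself is simple; a sketch, with City again being the geocoded city column (an assumed name):

```r
# The 28 cities from the table above, by descending response count
top.cities = names(sort(table(casino$City), decreasing = TRUE))[1:28]

# Keep only responses geocoded to one of those cities before plotting
toronto.area = casino[casino$City %in% top.cities, ]
```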

Toronto Geo Distribution of Responses

Much better than a table!  I used transparency on the colours of the circles to help better distinguish dense clusters of responses from sparse ones.  Based on the map, I can see 3 patterns:

1) It looks like a huge cluster of responses came from an area of Toronto approximately bounded by Dufferin on the west, highway 404 on the east, the Gardiner on the south, and the 401 on the north.

2) There’s also an interesting vertical cluster that seems to start well south of where highway 400 meets the 401 and travel north to the 407.

3) I’m not sure I would call this a cluster per se, but there definitely seems to be a pattern where you find responses all the way along the Gardiner Expressway/Queen Elizabeth Way/Kingston Road/401 from Burlington to Oshawa.

Now for the survey results!

Demographics of Respondents


As you can see, about 80% of the respondents disclosed their gender, with a noticeable bias towards men.  Also, most of the respondents who disclosed their age were between 25 and 64 years of age.  This might be a disadvantage, according to a recent report by Statistics Canada on gambling.  If you look at page 6 of the report, you will see that of all age groups of female gamblers, those 65 and older spend the most money on casinos, slot machines, and VLTs per spending household.  However, I guess it’s better to have some information than none.

Feelings about the new casino

Feelings about the new casino

Well, isn’t that something?  Only about a quarter of all people surveyed actually have positive feelings about a new casino!  I have to say this is pretty telling.  You would think this would be damning information, but here’s where we fall into the trap of whether or not to trust a survey result.

Here we have this telling response, but then again, Ipsos Reid conducted a poll of 906 respondents which concluded that 52% of Torontonians “either strongly or somewhat support a new gambling venue within its borders”.  People were asked about their support of a new casino at the beginning and the end of the poll.  At the end, after they had supplied people with all the various arguments made by both sides of the debate, they asked the question again.  Apparently the proportion supporting the casino was 54% on the second instance of the question.  They don’t even link to the original question form, so I’m left to wonder exactly how it was phrased, and what preceded it.  The only hint is in this phrase: “if a vote were held tomorrow on the idea of building a casino in the city of Toronto…”.  Does that seem comparable to you?

A Casino for “Toronto The Good”?

Casino fit image of toronto

This question seems to be pretty similar to the first question.  If a new casino fits your image of Toronto perfectly, then you’re probably going to be strongly in favour of one!  Obviously, most people seem pretty sure that a new casino just isn’t the kind of thing that would fit in with their image of “Toronto the Good”.

Where to build a new casino

Where casino built

In the response pattern here, we seem to see a kind of ‘not in/near my backyard’ mentality going on.  A slight majority of respondents seem to be saying that if a new casino is to be built, it should be somewhere decently far away from Toronto, perhaps so that they don’t have to deal with the consequences.  I’ll eat my sock if the “Neither” folks aren’t those who also were strongly opposed to the casino.

Casino Suitability in Downtown Area

Casino suitability at Exhibition Place

Casino Suitability at Port Lands

They also asked respondents to rate the suitability of a new casino in three different locations:

  1. A downtown area (bounded by Spadina Avenue, King Street, Jarvis Street and Queens Quay)
  2. Exhibition Place (bounded by Gardiner Expressway, Lake Shore Boulevard, Dufferin Street and Strachan Avenue)
  3. Port Lands (located south of the Don Valley and Gardiner/Lake Shore, east of the downtown core)

Looking at the above 3 graphs, you see right away that a kind of casino called an Integrated Entertainment Complex (kind of a smorgasbord of casino, restaurant, theatre, hotel, etc.) is viewed more favourably than a standalone casino at every location.  That being said, the responses are still largely negative!  Out of the 3 options for location of an Integrated Entertainment Complex (IEC), it was Exhibition Place that rated most positively, by a small margin (18.1% said highly suitable, vs. 16.2% for downtown Toronto).  There are definitely those at Exhibition Place who want the Toronto landmark to be chosen!

Desired Features of IEC by Location


These charts indicate that those who can imagine an Integrated Entertainment Complex in any of the 3 locations would like features there that allow them to sit/stand and enjoy themselves.  Restaurants, Cultural and Arts Facilities, and Theatre are tops in all 3 locations (but still bear in mind that less than half opted for those choices).  A quick google search reveals that restaurants and theatres are mentioned in a large number of search results.  An article in the Toronto Sun boasts that an Integrated Entertainment Complex would catapult Toronto into the stars as even more of a tourist destination.  Interestingly, the article also mentions the high monetary value of convention visitors and how much that would add to the revenues generated for the city.  I find it funny that the popularity of having convention centre space in this survey is at its highest when people are asked about Exhibition Place.  Exhibition Place already has convention centre space!!  I don’t understand the logic, but maybe someone will explain it to me.

Issues of Importance Surrounding a New Casino

Issues of Importance re the New Casino

Unlike the previous graphs, this one charts the % who gave a particular response on each item.  In this case, the graph shows the % of respondents who gave the answer “Very Important” when asked to rate each issue surrounding the new casino.  Unlike some of the previous questions, this one did not include a “No Casino” option, so more people can contribute positively to the response distribution.  You can already see that people are pretty riled up about some serious social and environmental issues.  They’re worried about safety, health, addiction, public space (sounds like a worry about clutter to me), traffic, and integration with surrounding areas.  I’ll bet that the people worried about these top issues are the people most likely to say that they don’t want a casino anywhere.  It will be interesting to uncover some factor structure here and then find out what the pro and anti casino folks are concerned with.

For my next post, I have in mind to investigate a few simple questions so far:

  1. Who exactly wants or doesn’t want a new casino, and where?  What are those who do and don’t want a casino most concerned with?
  2. Is there a “not in my backyard” effect going on, where those who are closest to the proposed casino spots are the least likely to want it there, but more likely to want a casino elsewhere?  I have latitude/longitude coordinates, and can convert them into distances from the proposed casino spots.  I think that will be interesting to look at!

Split, Apply, and Combine for ffdf

Call me incompetent, but I just can’t get ffdfdply to work with my ffdf dataframes.  I’ve tried repeatedly and it just doesn’t seem to work!  I’ve seen numerous examples on stackoverflow, but maybe I’m applying them incorrectly.  Wanting to do some split-apply-combine on an ffdf, yet again, I finally broke down and made my own function that seems to do the job! It’s still crude, I think, and it will probably break down when there are NA values in the vector that you want to split, but here it is:

mtapply = function(dvar, ivar, funlist) {
  # One row per level of the grouping variable, one column per function
  outtable = matrix(NA, dim(table(ivar)), length(funlist),
                    dimnames = list(names(table(ivar)), funlist))
  col = 1
  for (f in funlist) {
    # eval(parse(...)) turns each function-name string into the function itself
    outtable[, col] = as.matrix(tapply(dvar, ivar, eval(parse(text = f))))
    col = col + 1
  }
  return(outtable)
}

As you can see, I’ve made it so that the result is a bunch of tapply vectors inserted into a matrix.  “dvar”, unsurprisingly, is your dependent variable; “ivar” is your independent (grouping) variable; “funlist” is a vector of function names typed in as strings (e.g. c("median", "mean", "sd")).  I’ve wasted so much of my time trying to get ddply or ffdfdply to work on an ffdf that I’m happy I now have anything that does the job for me.
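A quick usage example on R’s built-in mtcars data:

```r
# Mean, median and standard deviation of mpg within each cylinder count;
# rows are the levels of the split variable, columns the functions
mtapply(mtcars$mpg, mtcars$cyl, c("mean", "median", "sd"))
```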

Now that I think about it, this will fall short if you ask it to output more than one quantile for each of your split levels.  If you can improve this function, please be my guest!
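One partial workaround for the single-quantile case: because the strings are run through eval(parse(...)), you can pass an anonymous function as a string, as long as it returns a single value per group.  A sketch on the built-in mtcars data:

```r
# Each string must evaluate to a function returning one value per group;
# the second column here is the first quartile of mpg within each group
mtapply(mtcars$mpg, mtcars$cyl,
        c("median", "function(v) quantile(v, 0.25)"))
```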

Know Your Dataset: Specifying colClasses to load up an ffdf

When I finally figured out how to successfully use the ff package to load data into R, I was apparently working with relatively pain-free data to load up through read.csv.ffdf (see my previous post).  Just this past Sunday, I naively followed my own post to load a completely new dataset (over 400,000 rows and about 180 columns) for analysis.  Unfortunately for me, the data file was a bit messier, so read.csv.ffdf wasn’t able to finalize the column classes by itself.  It would chug along until certain columns in my dataset, which it at first took to be one data type, proved to be a different data type, and then it would give me an error message, basically telling me it didn’t want to adapt to its changing assumptions about which data type each column represented.

So, I set out to learn how I could use the colClasses argument in the read.csv.ffdf command to manually set the data types for each column.  I adapted the following solution from a stackoverflow thread about specifying colClasses in the regular read.csv function.

First, load up a sample of the big dataset using the read.csv command (The following is obviously non-random. If you can figure out how to read the sample in randomly, I think it would work much better):

headset = read.csv(fname, header = TRUE, nrows = 5000)

The next command generates a list of all the variable names in your dataset, and the classes R was able to derive based on the number of rows you imported:

headclasses = sapply(headset, class)
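Before hunting for mismatches one by one, it can help to survey how the derived classes are distributed; a small sketch using the headclasses vector from above:

```r
# How many columns landed in each class?
table(headclasses)

# List the columns R guessed to be logical -- in a messy file these are
# often sparse numeric columns whose first rows were all empty
names(headclasses)[headclasses == "logical"]
```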

Now comes the fairly manual part. Look at the list of variables and classes (data types) that you generated, and look for obvious mismatches. Examples could be a numeric variable that got coded as a factor or logical, or a factor that got coded as a numeric. When you find such a mismatch, the following syntax suffices for changing a class one at a time:

headclasses["variable.name"] = "numeric"

Obviously, the “variable.name” should be replaced by the actual variable name you’re reclassifying, and the “numeric” string can also be “factor”, “ordered”, “Date”, “POSIXct” (the last two being date/time data types). Finally, let’s say you want to change every variable that got coded as “logical” into “numeric”. Here’s some syntax you can use:

headclasses[grep("logical", headclasses)] = "numeric"

Once you are certain that all the classes represented in the list you just generated and modified are accurate to the dataset, you can load up the data with confidence, using the headclasses list:

bigdataset = read.csv.ffdf(file="C:/big/data/loc.csv", first.rows=5000, colClasses=headclasses)

This was certainly not easy, but I must say that I seem to be willing to jump through many hoops for R!!

Big data analysis, for free, in R (or “How I learned to load, manipulate, and save data using the ff package”)

Before choosing to support the purchase of Statistica at my workplace, I came across the ff package as an option for working with really big datasets (with special attention paid to ff dataframes, or ffdf). It looked like a good option to use, allowing dataframes with multiple data types and way more rows than if I were loading such a dataset into RAM as is normal with R. The one big problem I had is that every time I tried to use the ffsave function to save my work from one R session to the next, it told me that it could not find an external zip utility on my Windows machine. I guess because I just had so much else going on, I didn’t have the patience to do the research to find a solution to this problem.

This weekend I finally found some time to revisit this problem, and managed to find a solution! From what I can tell, R appears to expect, in cases like the ffsave function, that you have command-line utilities like a zip utility at the ready and recognizable by R. Although I haven’t tested the ff package on either of my linux laptops at home, I suspect that R recognizes the utilities that come pre-installed on them. However, in the windows case, the solution seems to be to install a supplementary group of command-line programs called Rtools.  When you visit the page, be sure to download the version of Rtools that corresponds with your R version.

When you go through the installation process, you will see a screen like below. Be sure that you check the same boxes as in the screenshot below so that R knows where the zip utility lives.

Once you have it installed, that’s when the fun finally begins. Like in the smaller data case, I like reading in CSV files. So, ff provides read.csv.ffdf for importing external data into R. Let’s say that you have a data file named bigdata.csv, here would be a command for loading it up:

bigdata = read.csv.ffdf(file="c:/fileloc/bigdata.csv", first.rows=5000, colClasses=NA)

The first part of the command, directing R to your file, should look straightforward. The first.rows argument tells it how big the first chunk of data it reads in should be (ff reads parts of your data at a time to save RAM.  Correct me if I’m wrong).  Finally, and importantly, the colClasses=NA argument tells R not to assume the data types of each of your columns from the first chunk alone.

Now that you’ve loaded your big dataset, you can manipulate it at will.  If you look at the ff and ffbase documentation, a lot of the standard R functions for working with and summarizing data have been optimized for use with ff dataframes and vectors.  The upshot of this is that working with data stored in ffdf format seems to be a pretty similar experience compared to working with normal data frames.  Importantly, when you want to subset your data frame to create a test sample, the ffbase package replaces the subset command so that the resultant subset is also an ffdf, and doesn’t take up more of your RAM.
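For instance, a subset call might look like this (the column name is hypothetical); the result is itself an ffdf:

```r
library(ffbase)

# ffbase's subset method returns an ffdf, so the result stays on disk
# rather than filling up RAM
testsample = subset(bigdata, some.column > 100)
```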

I noticed that you can use the glm() and lm() functions on an ffdf, but I think you have to be careful because they are not optimized for use with ffdfs and therefore will take up the usual amount of memory if you save them to your workspace.  So if you build models using these functions, be sure to select a sample from your ffdf that isn’t overly big!
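One way to play it safe is to pull a modest random sample into an ordinary data frame before modelling; a sketch (the sample size and model variables are hypothetical):

```r
# Draw 50,000 random rows; indexing an ffdf by row numbers brings
# just those rows into RAM
idx = sample(nrow(bigdata), 50000)
insample = as.data.frame(bigdata[idx, ])

# Fit the model on the in-RAM sample only
fit = glm(outcome ~ pred1 + pred2, data = insample, family = binomial(logit))
```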

Next, comes the step of saving your work.  The syntax is simple enough:

ffsave(bigdata, file="C:/fileloc/Rwork/bigdata")

This saves a .ffData file and a .RData file to the directory of your choice with “bigdata” as the filenames.

Then, when you want to load up your data in a new R session during some later time, you use the simple ffload command:

ffload(file="C:/fileloc/Rwork/bigdata")

It gives you some warning messages, but as far as I can tell they do not get in the way of accessing your data.  That covers the basics of working with big data using the ff package.  Have fun analyzing your data using less RAM! :)