
Useful Pandas Snippets

Even after almost two years of working with Pandas, the incredibly useful Python data analysis library, I still need to look up syntax for some common tasks. Finally got around to putting everything on a single “useful Pandas snippets” cheat sheet: these are essential tools for munging federal budget data.
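
To give a flavor of what's on the sheet, here are a few of the operations I find myself looking up most often, as a minimal sketch (the file and column names are invented for illustration):

    import pandas as pd

    # Hypothetical budget file; "agency" and "fy2013" are made-up columns.
    df = pd.read_csv("budget.csv")

    # Rename a column
    df = df.rename(columns={"FY 2013": "fy2013"})

    # Filter rows by value
    education = df[df["agency"] == "Education"]

    # Replace missing values, then group and aggregate
    df["fy2013"] = df["fy2013"].fillna(0)
    totals = df.groupby("agency")["fy2013"].sum()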


IPython, IPython Notebook, Anaconda, and R (rpy2)


IPython and the IPython Notebook have vast potential beyond their traditional use in the Python scientific programming community. Specifically, the Notebook is a great learning tool, and that’s something I plan to highlight in an upcoming talk at the New England Regional Developer (NERD) Summit.

Because the NERD mission is to reduce barriers for people entering IT (as opposed to having them waste years of their lives untangling Python package dependencies), the plan is to demo everything using the Anaconda Python distribution. That's maybe overkill, since even on Windows, installing Python and IPython isn't too terrible. That said:

  • It’s important for those who aren’t proficient on the command line to jump right in.
  • If people get hooked on Python and want to do more, they’ll have the important packages at the ready (Anaconda includes some important machine learning packages that are missing from the free version of its competitor, Enthought Canopy).

To understand the worst-case pain scenario before recommending Anaconda to beginners, I installed it on Windows. So far, so good.

The only snafu I’ve hit so far is getting the IPython-to-R integration working. This isn’t really a feature for beginners, but I want to show it because R is heavily used at local universities.

To be clear, this mess isn’t an Anaconda problem. However, if you’re using Anaconda on Windows and want to use IPython’s rmagic extension, here’s how.

  1. Install R.
  2. Add the directory with the R executables to your PATH. It has to be the directory with executables, not the main R folder. On a 64-bit machine, the directory is something like C:\Program Files\R\R-3.1.0\bin\x64.
  3. Add these two environment variables (h/t):
    • R_HOME (path of the main R folder, e.g. C:\Program Files\R\R-3.1.0)
    • R_USER (your Windows username).
  4. Restart Windows.
  5. Modify your Windows Python install registry key to point to your Anaconda Python location instead of the default Python installation (h/t). If you followed the default Anaconda install prompts (installing for the current user rather than all users on the machine), you’d change HKEY_CURRENT_USER\SOFTWARE\Python\PythonCore\2.7\InstallPath. If you’d rather script this step and step 7, see the sketch after this list.
  6. Download and install Dr. Gohlke’s rpy2 Windows binary (grab version 2.4.0 or higher): http://www.lfd.uci.edu/~gohlke/pythonlibs/#rpy2.
  7. Change the registry key from step 5 back to its original value.
  8. Open an IPython notebook or terminal and load the rmagic extension:
    %load_ext rpy2.ipython
  9. You should be able to test everything out using this sample code.
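
If you’d rather flip the registry key in steps 5 and 7 from code instead of clicking through regedit, here’s a rough sketch using the standard library. Note the assumptions: the C:\Anaconda path is a guess (check where Anaconda actually lives on your machine), and you’ll want the printed original value on hand for step 7.

    # A sketch for steps 5 and 7; Python 2.7 (the module is winreg on Python 3)
    import _winreg as winreg

    KEY_PATH = r"SOFTWARE\Python\PythonCore\2.7\InstallPath"
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_ALL_ACCESS)

    # Record the original value -- step 7 needs it
    original, _ = winreg.QueryValueEx(key, "")
    print "Original value:", original

    # Step 5: point the key at Anaconda (the path is an assumption -- adjust)
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, r"C:\Anaconda")
    winreg.CloseKey(key)

    # After installing the rpy2 binary (step 6), run the same code with
    # `original` in place of the Anaconda path to undo the change (step 7).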

For comparison, these are the Ubuntu instructions.

  1. Install R:
    sudo apt-get install r-base r-base-core r-base-html
  2. Install rpy2:
    pip install rpy2
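
Either way, once rpy2 is in place, a quick smoke test in a notebook cell looks something like this (a minimal sketch; the data is arbitrary):

    %load_ext rpy2.ipython

    import numpy as np
    x = np.array([1, 2, 3, 4])

    # Push a Python array into R, then compute on it there
    %Rpush x
    %R mean(x)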

Revisiting Python on Windows

Three years ago I wrote a series of tutorials for setting up Python/Django on Windows.

Despite taking great pains to make it all work and then meticulously documenting the details, I abandoned that idea in favor of an Ubuntu VirtualBox soon after those posts went live. It’s a long story, but at some point you need to cut your losses and stop throwing good time after bad.

But this summer marks a return to Windows. I decided our data intern should learn Python or R, so he could experience a world beyond the proprietary stats packages colleges rely on. We settled on Python, and Windows was the practical choice; no need to add Linux to his already long list of things to pick up.

To test things out for him, I crossed my fingers and installed Enthought Canopy, a canned Python environment for data viz and analysis, hoping it would take away the pain of installing Python packages on Windows. For the most part, it did.

Canopy (which has a free version) makes it easy to get up and running quickly. If you’re getting started with Python data analysis, use it, and don’t spend hours of your life installing all the packages yourself. That way lies madness.

That said, some of the latest and greatest Python data viz packages aren’t included in the Canopy distribution. If you want to learn those, you’ll have to install them yourself, which is where things can go awry. For example, if you’re on Windows and the package you’re installing needs a 64-bit C compiler, you have to follow these 6 simple steps to get one: http://springflex.blogspot.com/2014/02/how-to-fix-valueerror-when-trying-to.html

The Python data ecosystem is extremely compelling, but there are still too many barriers for a beginner to jump right in, especially on Windows.


Hans Rosling Bubbles for Mere Mortals

Just a Google motion charts experiment to get ready for Hack for Western Mass.

This example is a bit nonsensical, but we’ll be working this weekend on what kind of story we can tell with hunger-related data from the World Bank.


Running IPython Notebook From Vagrant/VirtualBox

Updated 9/1/2014 to add a few more IPython Notebook dependencies.

Honestly, you’d think it would be easy to remember these four simple steps, but I never seem to. Since the IPython Notebook is pretty much the greatest thing since sliced bread, here’s how to run it in Vagrant/VirtualBox and access the notebook from the host machine’s browser.

  1. Make sure the prerequisite packages are installed in the virtual machine’s Python environment:*
    • jinja2
    • sphinx
    • pyzmq
    • pygments
    • tornado
    • ipython
  2. Make sure your Vagrantfile is forwarding port 8888 to port 8888 (or whatever you’d like to use), e.g.:
    config.vm.network "forwarded_port", guest: 8888, host: 8888
  3. In your virtual machine, run the IPython notebook server: ipython notebook --ip=0.0.0.0
  4. View the notebook in the host’s browser: http://localhost:8888

*Alternatively, you can pip install ipython[notebook] to install IPython and all Notebook dependencies. I got errors when doing this via zsh, though it worked after switching to Bash.

Update 11/6/2014: Praful Mathur left a good tip for using the pip install ipython[notebook] syntax with zsh. You have to escape the hard brackets: pip install ipython\[all\]. Thanks!


Transitioning to Open Government Data

Earlier this fall, I was on a panel at the Association of Public Data Users annual conference. I do love going to DC and being in a room full of people who know what the Consolidated Federal Funds Report is.

The point of the presentation was:

  • Open government data is really exciting and has so much potential.
  • But if it’s going to replace traditional sources of “designed” government data, some people will be left behind.

Josh Tauberer’s 2nd Principle of Open Government Data says that data should be provided in its most granular form:

This principle relates to the change in emphasis from providing government information to information consumers to providing information to mediators, including journalists, who will build applications and synthesize ideas that are radically different from what is found in the source material. While information consumers typically require some analysis and simplification, information mediators can achieve more innovative solutions with the most raw form of government data.

As a data person, I support this principle 100%. That said, it’s a huge change for organizations used to getting pre-packaged government information. Congratulations–you’ve just been promoted to mediator!
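
To make that promotion concrete: instead of downloading a pre-built summary table, you pull the raw records and roll them up yourself. A trivial pandas sketch (the file and column names are invented):

    import pandas as pd

    # Hypothetical raw, award-level spending records
    awards = pd.read_csv("awards.csv")

    # The pre-packaged table you used to receive is now yours to build
    summary = awards.groupby(["state", "program"])["amount"].sum()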



The Demise of Government-Created Statistical Data?

Like most data people, I prefer order and logic. So it was a huge shock when I joined a federal budget research organization and started learning about the orderly and logical process by which the U.S. government creates an annual budget. An orderly and logical process that Congress mostly disregards.

Really, the whole politicized debacle offends my sensibilities as a citizen and as a data professional.

Furthermore, the recent zeal for budget cuts has resulted in budget cuts that affect our ability to make smart budget cuts. Specifically, I’m talking about attacks on government-created statistical data—data that’s* used by lawmakers, social service organizations, and businesses to make decisions and allocate increasingly scarce resources.

Two examples I’ve written about recently:

  • Is Federal Spending Transparency on the Decline?: a guest post for the Sunlight Foundation’s blog about the demise of the Consolidated Federal Funds Report and why that makes it harder to understand federal spending.
  • American Community Survey Under Attack: the House recently passed a spending bill that prohibits the Department of Commerce from funding the American Community Survey (ACS). The yearly ACS replaced the decennial census long-form questionnaire, and its data helps* state and local governments determine how to distribute funds, among other things. See here, here, and here for more information about the widespread usefulness of the ACS.

Of course, order and logic sometimes need to be tempered with a dose of pragmatism. But when our governing body is governed almost entirely by short-term thinking, we should think about not electing them again.

*Language evolves!


Strata 2012: Making Data Work

It’s been over a month since Strata 2012. Lest they meet the same fate as scribbles from conferences long past, I’m putting my notes here instead of leaving them in whatever notebook I happened to be toting around that week.

The biggest a-ha moment, and one that I’ll be writing about in the future, came from Ben Goldacre’s keynote, when he compared big data practitioners to drunks looking for car keys only where the light shines. We focus on the data that’s available without asking, “what’s missing?” Plus, it’s fun to hear someone with a British accent say “blobbogram.”



Protovis Visualization for Older IE

Two days ago, I posted my Flare visualizations, which are based on a Flash/ActionScript library, and explained that we can’t yet use the D3 visualization library because it outputs SVG, which isn’t supported by older versions of IE.

The very next day, Hjálmar Gíslason of DataMarket gave a talk at O’Reilly’s Strata Conference. DataMarket faced the same problem back in 2010: after reviewing over 100 visualization libraries, they chose Protovis (a predecessor of D3). Not wanting to exclude the 20% of the world still using IE 7/8, they developed protovis-msie, a tool that converts Protovis SVG output to VML, a vector format understood by older browsers.

And… they open-sourced it. So Protovis is now on the table for use at National Priorities Project. Thank you, DataMarket!

Like Flare, Protovis is no longer under active development. That said, it still has an active user community (unlike Flare). And the output won’t be Flash, so iOS is back on the table.

DataMarket’s strategy is to continue using Protovis until most IE users are on version 9 (which supports SVG) and then switch over to D3. It was refreshing to hear browser support strategies from people developing visualizations for commercial use; they don’t have the luxury of ignoring IE 8, which is tempting to do but not viable in the real world.


Data Visualizations with Flare

Two weeks ago, the White House released President Obama’s FY 2013 budget request. Using the numbers scrubbed by NPP’s crack research team, I created a few visualizations using the ActionScript/Flash-based Flare data visualization library (h/t Washington Post and Nathan Yau).

Flare was ideal because it includes sample code for a stacked area chart with tooltips, exactly what we wanted. I had some concerns about the Flash output, but many of our website visitors use browsers that don’t support SVG (IE8), so tools like D3 aren’t an option just yet.

Here’s a preview of what we’ll include (not the final version).  The first example is built with normalized data:

[Embedded Flash visualization (requires Adobe Flash Player)]

For the second example (total federal spending by category), we wanted to convey the overall size of the budget over time, so we didn’t normalize the data. As a result, the huge numbers caused some formatting issues, but it’s still an interesting story–especially the 2009 spike. Also note the rise in healthcare spending over time: 7% of the budget in 1976 and 25% in 2013.

[Embedded Flash visualization: total federal spending by category (requires Adobe Flash Player)]
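
For the curious, “normalizing” here just means converting each year’s category totals into shares of that year’s total budget before charting. A pandas sketch of the idea, under an assumed layout (years as rows, categories as columns; the file name is invented):

    import pandas as pd

    # Hypothetical wide table: one row per fiscal year, one column per category
    spending = pd.read_csv("spending_by_category.csv", index_col="year")

    # Normalize: divide each row by that year's total, so cells become shares
    shares = spending.div(spending.sum(axis=1), axis=0)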

Flare makes it easy to lay out the data and create the animated transitions, and after making a few tweaks to the Flare library and the stacked area sample code, I’m happy with the way these turned out.

That said, I’d be reluctant to use Flare again. It isn’t being actively developed, and there’s nowhere to turn for help when you get stuck (also, the whole Flash thing). Visualizations are evolving, and the tools to create them–no matter how good they are–evolve too.
