Editing papers using text-to-speech software

I spent a large chunk of today editing a paper that I’m submitting for publication this week, which prompted me to share one of my favorite editing tricks. Inspired by this ProfHacker article from last year, I started using a text-to-speech tool to “read” my papers to me for the final look-through.

This has the huge advantages of being completely objective and slow-paced, and of picking up things like doubled words or missing modifiers. There are a ton of tools out there, but I am partial to the Announcify Chrome extension, which has a nice pause button and is easy to use. Plus I like cloud software way better than desktop software, and I sometimes use it to read online articles as well.

My (extremely hacky) workflow for this is as follows. Note that it works especially nicely if you have two screens.

  1. Copy LaTeX text into a Google Doc, and publish the document to the web (under File).
  2. Read the published document using Announcify, skipping through tables and long equations.
  3. Edit document in Texmaker, pausing Announcify if necessary.

There was also an article about how you can use the built-in software on a Kindle to edit long documents, presumably while lounging on your couch sipping hot cocoa.

NYC Financial Crime Task Force, a.k.a. the “get shit done people”

I just got back from DataGotham, an awesome conference for the data science community in NYC. All of the talks were really great and are worth checking out on YouTube, but the one I want to mention was by Michael Flowers of the NYC Financial Crime Task Force. I was aware of the NYC FCTF because one of the quants, Lauren Talbot, gave a great interview on my advisor’s blog.

This was a talk that made me want to jump out of my seat and go save the world using statistics (OK maybe I’m exaggerating… but not by much). It was so refreshing to see data analysis being implemented on a broad scale to make things immediately better for everyone involved. And I think that it’s stories like these that make people excited about statistics. Who doesn’t want to be one of the “get shit done people”?

(An aside: if you’re not acquainted with the data science label, there was a really interesting talk by Harlan Harris discussing just that, complete with a survey analysis! And another aside: I’d love to see what the NYC FCTF is doing on Wall Street – I mean, it’s in their banner photo! – but sadly this wasn’t discussed…)

Enjoy! I assure you it’s worth the 20 minutes.


Via Twitter I’ve found a new blog, GradHacker, that I’m already in love with. The archives are full of great articles that are refreshingly honest about the challenges many face in grad school, which can be difficult to discuss in person since just about everyone is dealing with imposter syndrome. And there’s also a lot of good practical advice about productivity and tools.

This is, of course, another ___Hacker blog added to my RSS reader, joining my beloved ProfHacker and LifeHacker.

Love for ProjectTemplate

The advantage of writing a blog post about the tools you wish you’d used throughout grad school is that, well, it makes you check them out. I went through the ProjectTemplate tutorial, and I’m hooked. Here are the advantages as I see them (with a quick getting-started sketch after the list):

  1. Routine is your friend. This could really go for everything in your life. Small decisions contribute to decision fatigue, even if it’s something as simple as deciding where to put a file. By automating as much as possible, you save your finite willpower for real work instead of grunt work.
  2. It’s easier to start somewhere and then customize than to start from the ground up. After four years in grad school, I have a system I’ve hacked together for organizing my analyses, but I would rather not have put the energy into creating that system in the first place. Designing a good system takes up a surprising amount of brain space, whereas modifying one takes much less. And since the author of ProjectTemplate seems to know what he’s doing, I doubt I’ll modify much.
  3. Reproducibility should be as easy as possible. ProjectTemplate makes it very easy to include (but not re-run) the code you have for preprocessing the data, or for other steps that you might only perform once during an analysis. And since reproducibility is such an important aspect of the scientific process, it should be as easy as possible.
  4. Finding things should also be as easy as possible. This is quite similar to reproducibility, but on the individual level. I go back to old analyses all the time to borrow code, which can be extremely frustrating since some of my older analyses aren’t well organized (see #2). With a uniform system in place, you know exactly where you put everything.
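
As a quick getting-started sketch (this assumes ProjectTemplate is installed from CRAN, and the project name is made up):

    library(ProjectTemplate)
    create.project("my-analysis")  # builds the standard directory skeleton
    setwd("my-analysis")
    load.project()  # reads the config, loads packages and data/, runs munge/ scripts

From there, cache() stores expensive intermediate objects so they aren’t rebuilt on every load – which is what makes the “include but don’t re-run” point above work.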

Just as an aside, I get the impression from the computer scientists I’ve talked to that they don’t necessarily learn “how to code” in coursework either, but are also expected to develop a system on their own. This I don’t understand, and perhaps when schooling catches up to the computer era we’ll see a change. For example, in high school I learned the five-paragraph format for writing essays, even though very few professional essayists use that format in publications. But it’s still a solid foundation for expression, one you can stray from as you become more confident in your abilities and command of the process. I suppose this argument requires that coding be taught in high school, but that’s another thing I’d love to see. One day!

The Setup (Part 1)

One of the more challenging things about beginning graduate school was learning what tools and software I needed in order to work efficiently. Unlike college, where software requirements were laid out in front of me and everyone seemed to use the same tools, in graduate school both the obscurity of the tools and the number of options seemed to multiply. Additionally, since (like most Biostat students in my department) I came from a math background rather than a computing background, I had to drastically increase my computer literacy in a very short period of time. To help new students, a few of us more senior Biostat students have taken to presenting the tools we use in our departmental “Computing Club,” but a blog post really makes more sense. So without further ado, and in the style of one of my favorite blogs, I present…

The Setup (Part 1):

What hardware do you use?

I’m one of the last people in my department clinging to my PC. It’s absolutely true that Macs make it easier to get up and running with the common tools biostatisticians need, but it’s also true that you can get all of those tools on Windows with a bit of initial work (and in my opinion you can even get more functionality and customization). I have a Lenovo ThinkPad X201 Tablet. This is my second tablet – I got the first because I wanted to take digital notes, which I did for all of grad school with some success. I ended up printing all of my notes out for studying, so I’m not sure it was more convenient than, say, scanning in my notes afterwards (a mammoth project I’m currently undertaking with all of my college coursework). However, the tablet has been absolutely essential for teaching, and it’s very nice for annotating papers. I once bought a non-tablet Mac but then panicked and got this computer instead. It’s hard to give up functionality! I hook it up daily to an external monitor and use that huge Microsoft ergonomic keyboard and mouse. I’m actually not sure how so many heavy computer users get by without external keyboards. Perhaps I just have sensitive wrists.

And what software?

I do all of my research in R, which is really the academic norm for research statistics (and certainly the norm at Hopkins). I do all of my scripting in my beloved Notepad++, which becomes infinitely more awesome with the little script NppToR. An amazing resource that Hopkins provides for its community is a high-powered, well backed-up computing cluster (so essentially all of my research is done on an extremely local cloud). To use the cluster you need an ssh client and an SCP client. Drastically oversimplified, what this means for me is that 1) I need something I can open R with or run batch jobs from (the ssh client), and 2) I need somewhere I can drag my files (the SCP client). I use PuTTY for the ssh client and WinSCP for SCP. My workflow is to open PuTTY, log onto the cluster, and start R. Then I open WinSCP, open my R scripts in Notepad++, and send the code to R by hitting F9 (yay NppToR!). If I’m running batch jobs, I open a second PuTTY window to submit them from. One last detail is that you must install and run Xming for graphs from R on the cluster to display. I think most students do more locally than I do, but I prefer doing as much on the cluster as possible. It always has the latest version of R, and everything is backed up well. I find it’s less hassle once you get a good system down, and I like living on the cloud enough that it’s worth it to me. It also makes me less aware that I’m running Windows.
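
If you’re setting this up, here’s a quick sanity check from an R session on the cluster (nothing here is Hopkins-specific; it assumes X11 forwarding is enabled in PuTTY and Xming is running on your machine):

    Sys.getenv("DISPLAY")  # non-empty (something like "localhost:10.0") when forwarding is on
    plot(rnorm(100))       # should pop open an Xming window on your local machine
    # Batch jobs have no display, so write plots to a file device instead:
    pdf("figure.pdf")
    plot(rnorm(100))
    dev.off()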

For writing I use LaTeX, again the academic norm. TeX is confusing the first time around, so here’s the crash course: TeX is a typesetting system, and LaTeX is the markup language built on top of it. What this means is that you install TeX onto your computer, and then install a LaTeX editor. You “code” documents in the LaTeX editor and then compile them into PDFs, where they look pretty and professional and mathy. I use MiKTeX to install TeX, LEd as my LaTeX editor, and SumatraPDF as my PDF viewer. My workflow is to open up LEd, then open the PDF (in SumatraPDF) of the document I’m creating. Whenever I compile my document, Sumatra automatically refreshes to the new version. You can also set it up so that double-clicking a word in Sumatra automatically highlights it in LEd (tutorial). If you use Adobe this system won’t work, because Adobe locks the open PDF and won’t refresh it (instead you’ll get an error when you try to compile the code). Don’t use Adobe, basically.
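
If you’ve never seen LaTeX, here is roughly the smallest document that will compile, just to give the flavor (the equation is a throwaway example):

    \documentclass{article}
    \begin{document}
    Math looks much better typeset: the least squares estimate is
    $\hat{\beta} = (X^{T}X)^{-1}X^{T}y$.
    \end{document}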

edit: I might be converting to Texmaker as my LaTeX editor. It has spell-check and a built-in PDF viewer, and took all of two minutes to set up. But I’d still recommend downloading Sumatra.

For presentations I still use PowerPoint (I hear you hatin’). That’s more the norm in genomics than in other biostatistics fields. We like it because we can easily share slides, drop in pictures and graphs, and keep the equation count down.

I use Mendeley to organize my downloaded PDF papers. I looked for a PDF-organizing solution for years before having this recommended to me, and it’s perfect. It syncs online, is cross-platform (with an iPad app), and most importantly it auto-generates the BibTeX files needed to create bibliographies within LaTeX documents. This means that creating BibTeX entries (complete with automatically generated citation keys) is a drag-and-drop process, rather than a pasting-Google-Scholar-results-into-a-text-editor process. I see this as a huge improvement.
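
For the uninitiated: a BibTeX file is just a plain-text database of entries like the one below (a completely made-up reference, and the file name is up to you). In the .tex document, \cite{smith2012} then pulls it into the bibliography via \bibliography{library}:

    @article{smith2012,
      author  = {Smith, Jane and Doe, John},
      title   = {An Entirely Hypothetical Paper},
      journal = {Journal of Made-Up Results},
      year    = {2012},
      volume  = {8},
      pages   = {123--145}
    }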

In the extremely unlikely event that you use a tablet PC, I really like PDF Annotator, which you can get for free if you’re a Hopkins student. I use it for teaching, grading, and annotating papers. If I’m taking a lot of notes, the native Windows Journal software is quite nice and keeps file sizes much smaller than PDF Annotator.

What’s your dream setup? Or rather, what I wish I’d done differently…
There are a few places where I know I’m not efficient in my computing that I’d like to change, so I’ll list them here. You youngins can learn from my mistakes!

You might notice that I do my R coding on the cloud but my writing locally. This means that I can’t easily use Sweave, a really cool tool that lets you write LaTeX and run R code in the same document (and is much applauded because it allows for easy reproducibility). Instead I have to import my data and graphs into my papers manually (the xtable package in R is essential to my life). This isn’t ideal and I’d like to change it, but my analyses usually take days to run, so Sweave loses a lot of its appeal (or at least the way I envisioned using it does). To ensure reproducibility I always publish my code on GitHub. In fact, I’m going to start migrating all of my research information (including project descriptions, links to papers, etc.) onto GitHub.
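
For the curious, the manual approach looks something like this (a toy model on a built-in dataset; print() on an xtable object emits a LaTeX table you can paste straight into the paper):

    library(xtable)
    fit <- lm(mpg ~ wt + hp, data = mtcars)  # toy regression
    # Turn the coefficient matrix into a LaTeX tabular:
    print(xtable(summary(fit)$coefficients, digits = 3))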

I wish I had established earlier in my career 1) version control (using git), and 2) a uniform project architecture (see, for example, ProjectTemplate). These are things that get discussed in the Hopkins computing coursework, but it’s unclear how many academics really use them. GitHub makes version control extremely accessible, though the disadvantage is that everything must be public unless you pay.

I wish I had discovered OneNote while taking courses because I’m sure I would have used and enjoyed it.

I also think standing desks are neat. One day…

Why “Part 1”?
Because Part 2 will be all of my non-academic software and hardware!

edit: I love The Setup and The Setup loves me! They featured me in their community section of the blog!

edit: I’ve compiled all of the tools I mention in this blog into a bitly bundle for your clicking and sharing convenience.

Hello world!

I’m starting a blog to collect various thoughts on statistics, computing, advice for graduate students, and the like.  Knowing me, I’ll very likely add in a few articles and jokes.  Hope you enjoy!