In this issue, I'd like to talk about something I'm doing outside of research and the YouTube channel. It's about jobs.
Playing the long game
I'm a 4th-year Ph.D. candidate right now. According to the rules of my department, I'm only funded through year 5, so I'm running out of runway. Barring a second M.S. or Ph.D., there's no avoiding a full-time job.
Even though I have more than a year until I graduate, I’m taking measures now to prepare myself as much as possible for job applications. I know exactly where I want to go: I want to be an industry statistician at a pharma company.
So it makes sense to do some recon on these kinds of job postings.
Crawling through the cringiest site
When I first started my Ph.D., I thought that getting a job afterwards would be simple. I mean, what company wouldn't want to hire someone with a Ph.D.? By now, I've realized that I'm competing against a deeper pool of talented candidates, and that my research isn't guaranteed to match any particular company's needs. In the end, companies want someone who can provide value immediately.
My usual approach to internships has been to apply once recruiting season starts. I've been lucky and landed a few because the project happened to be close to my research. But I don't want to be lucky; I want to be prepared.
For the next few months, I'm going to collect data on LinkedIn postings relevant to the job I want after graduation. With these posts in hand, I can get a better sense of which skills are requested more often than others. If I already have a skill, I can make sure to feature it explicitly on my profile. If not, I know I have about a year to learn it.
Rather than wait to see if I’m relevant, I’ll see what it takes to be relevant.
Not just for statistics
There is no shot that I'm going to collect all that information from LinkedIn by hand. Instead, I'm going to leverage R. R doesn't just have functionality for statistics; there are also dedicated libraries for web scraping, which will let me gather information from web pages.
I already have a working version of a scraper built on RSelenium and the tidyverse. Job posting data is incredibly messy, especially the body text of each post, so processing that will need some more thought on my end. I'll make my scraper available on GitHub when I think it's in a good state.
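To give a flavor of the analysis side, here's a minimal sketch of the skill-frequency idea in base R: given the text of scraped postings, count how many postings mention each skill. The sample postings and the skill list here are hypothetical placeholders, not real scraped data, and the actual scraper uses RSelenium to get the text in the first place.

```r
# Count how many postings mention each skill (whole-word, case-insensitive).
# `postings` is a character vector of posting text; `skills` is a character
# vector of skill keywords. Returns a named integer vector of counts.
count_skills <- function(postings, skills) {
  sapply(skills, function(skill) {
    pattern <- paste0("\\b", skill, "\\b")  # whole-word match only
    sum(grepl(pattern, postings, ignore.case = TRUE))
  })
}

# Hypothetical example postings, standing in for scraped body text
postings <- c(
  "Seeking statistician with SAS and R experience",
  "Biostatistician role; R and survival analysis required",
  "Data scientist: Python, SQL"
)
skills <- c("R", "SAS", "Python")
count_skills(postings, skills)
```

The whole-word regex matters here: without the `\b` boundaries, searching for "R" would match any stray letter r in the posting text.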
My thinking is that I’ll collect data on a bunch of job postings and make a video all about it. Stay tuned.
Moral of this post: job hunting sucks.
See you next week.
Christian
Current State of The Channel
😵💫 What am I working on right now?
Working on a video for tips on learning statistics
🧐 What am I enjoying right now?
Book — Taking a break from business/self-improvement books for a bit. I started Blood, Sweat, and Pixels by Jason Schreier. I like playing video games, but I can’t imagine going through the hell that is video game development.
Thing — Did you know that Snickers has a high-protein bar?
📺 What are my recent videos?
Edutainment — The better way to do statistics: a video explaining how Bayes' Theorem is used in statistics. Bayesian statistics isn't standard in coursework, so many students never get exposed to these ideas.
Explainer — An easier way to do sample size calculations: a video showing how to use Monte Carlo simulation to do a sample size calculation. The code I used for this video can be found in this GitHub repo.
📦 My other stuff
I personally wrote guided solutions to problems from the first chapter of Andrew Gelman's Bayesian Data Analysis. I wrote this guide to help advanced self-learners develop their statistical problem-solving and implement some of the solutions in R.
Heads up! Some of the links in my issues are affiliate links, so I may get a small amount of money if you choose to buy something through them. I only link to stuff I actually use and consume.