Chemical-free Chemist
For the next month at least, I will be working from home (WFH). Running experiments is distinctly difficult without access to facilities. Therefore, I have changed my work goals and projects for the next four weeks. I previously outlined my strategy for how to work from home, and this post is my strategy for what to work on.
-
Literature review
I like to keep across many areas that pique my interest. As a synthetic organometallic chemist, I enjoy reading about interesting new methodologies. As a physical chemist, I am interested in unusual kinetics (e.g. autocatalysis, vide infra). As an electrochemist, the latest innovations are fascinating - such as standardised electrochemistry kits. When I start a project on a new topic, or if it is otherwise sufficiently interesting, I do a broad review to get up and running as quickly as possible. I generally read a few reviews plus the most cited papers in the area, and then set up an alert to keep abreast of new articles. This keeps me reasonably informed about major changes in thinking in the area, though occasionally some things fall through the cracks. I plan to pick one topic per week and spend a few hours bringing myself up to speed. This will remind me of what I know, and reassure me that I am across that topic.
-
Deep dive
At least once a year I like to do a deeper dive on a topic. Last year I had the opportunity to write my own short course, and decided to use it to learn a lot myself. I chose flow chemistry, and spent a few months getting my head around the admittedly complex topic well enough to feel confident that I knew the main players, the main viewpoints, and the main concepts of when and how to use flow over batch chemistry. This year, I’m in the middle of a deep dive on autocatalysis. I have found that the hours of uninterrupted time allowed by WFH let me get far more out of reading papers than usual. Even though my attention never felt fractured by my colleagues in the office, I suspect working alone brings an extra level of concentration that lets me absorb the ideas more comprehensively. Don’t get me wrong: I have so far found it exhausting to critically read and summarise articles. I am using timeblocking both to define up front what I am doing (which frees up some mental capacity for the task itself: no deciding moment to moment what I should be doing) and to decide how long, realistically, I can work on a given task. Three hours (three 50-minute pomodoro periods with 10-minute breaks between them) is my current absolute limit before I’m unable to do any more hard mental work. I should note that I’m not just casually reading papers here; I’m reading to understand and challenge the content, and summarising the concepts and findings in my own words (stored in a digital zettelkasten - book link for context, and I use Zettlr).
-
Data spring clean
Data. We all have it. Some sets are small, some very large. We shouldn’t be embarrassed talking about it. My approach to data management is constantly evolving: sometimes due to work-related mandates (looking at you, electronic notebook), sometimes as I get more comfortable with a new tool (git). The specifics of my current approach to data integrity and management are likely to be a topic for another post. Regardless, I will be giving my data a good dusting during this WFH month. I’ll make sure that the data from my experiments is present (i.e. I copied the files off the instrument and put them in the right folder), and that relevant contextual notes are there so that another researcher can make sense of my files. There are some results that I’ll be replotting as I draft publications (see plan point 4), some that I’ll be moving from my assets to my archive as the related project has concluded (to be covered in a later post - idea from AE), and some that will need to be re-interrogated in light of new findings. An example of the re-interrogation is a catalytic cycle that I’m investigating. A co-ligand that I had assumed was inconsequential seems to be important, so I’ll be going back over the data to plot the ligand concentration against time and see if it correlates with the increasing rate of reaction.
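As a rough illustration of what that re-interrogation looks like, something like the sketch below is what I have in mind: overlay the co-ligand concentration and the rate on the same time axis and print a quick correlation coefficient as a first sanity check. The file name and column names here are placeholders, not my actual instrument exports, which are never laid out this neatly.

```python
# A minimal sketch, assuming a tidied-up CSV export with hypothetical columns
# time_s, ligand_conc_M and rate_Ms. Not my real analysis script.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("kinetics_run.csv")  # placeholder file name

fig, ax1 = plt.subplots()
ax1.plot(df["time_s"], df["ligand_conc_M"], color="tab:blue")
ax1.set_xlabel("time / s")
ax1.set_ylabel("[co-ligand] / M", color="tab:blue")

# Second y-axis so the rate trace can be compared on the same time axis
ax2 = ax1.twinx()
ax2.plot(df["time_s"], df["rate_Ms"], color="tab:red")
ax2.set_ylabel("rate / M s$^{-1}$", color="tab:red")

# Quick first look: does the co-ligand concentration track the rate?
print("Pearson correlation:", df["ligand_conc_M"].corr(df["rate_Ms"]))

fig.tight_layout()
fig.savefig("coligand_vs_rate.png", dpi=300)
```

A simple correlation like this is only a prompt for a proper kinetic treatment, but it is enough to tell me whether the co-ligand deserves a closer look.
-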
Draft as many publications as I can
I am currently at the tail end of my first post-doc position. With just under 5 months to go, I want to make sure that all of the (successful) work that I’ve done is communicated as soon as possible. My usual approach is to work on a project, planning my next steps and outlining my thought processes in a brief Workflowy bullet. I do not usually try to write a manuscript draft until the bulk of the ‘story’ is worked out. However, given my inability to perform experiments at the moment, I plan to skeleton out all of my current projects as publications. My approach is to process the salient data into acceptable formats (e.g. I use Excel for initial plots,1 and Origin once I know for sure the plot is going into the paper) and put the figures in a sensible order with some bullet points of discussion. This will show (and indeed has already shown) any missing experiments that are needed to ensure a watertight argument (a note of which goes onto my ‘to do once I can return to the lab’ list of next actions).
-
Keep a sense of community
Until my first week of WFH, I did not realise quite how important the interactions with my colleagues were, from briefly discussing a weird result or hypothesis while making a coffee to just generally interacting with other human beings. I am enjoying how productive I can be during the long stretches of uninterrupted time, but I have also missed doing the crossword with my peers - a lunchtime staple in the before-times. To encourage some sense of normalcy/camaraderie, I have scheduled a recurring Friday lunchtime ‘crossword over zoom’ meeting. During the first one, my lab mates and I successfully completed the quick crossword, but we also shared how we’re going and had a much less formal exchange of ideas and information than the more regimented 1-2-1 catchups allow.
These projects will hopefully keep me productive (and sane) during the next month (minimum) of WFH.
-
Excel cops a lot of flak for being a bad program for data analysis. Whilst I agree it is not appropriate for running, say, a medical database of COVID-19 exposures, I think it is ideal for data integrity: everyone is able to open the files (with Excel or LibreOffice Calc), and that is unlikely to change for years and years. So for raw data and preliminary analysis, I think Excel is the perfect tool for the job. ↩