Cell asked for help – here you go…

Recently, Cell editor-in-chief Emilie Marcus posted an article from the perspective of an editor, decrying the uptick in allegations of data mishandling that have flooded editors’ desks in recent years, and asking what to do about it. I’ve already opined on how Cell royally screwed up this process in the past, so instead let’s focus on the actual questions asked, and some solutions. Here are the specific questions posed by Dr. Marcus…


(1) At a time when there is increasing pressure to reduce the costs of publishing, how should journals, institutions, and funders apportion time and resources to addressing a burgeoning number of alerts to potential instances of misrepresentation or misconduct?

Charge less and spend more?  Scientific publishing is a multi-billion-dollar industry, with profit margins routinely in the region of 35%. All the labor (i.e., peer review) is essentially free, and the actual production costs are minimal now that everything is online.

The issue here is that Cell and the other publishers want it both ways – they like the title “gatekeepers of truth”, but they don’t want to shell out the cash required to ensure that what they’re peddling is actually true!  Seriously, the answer to this one is so simple, it really calls into question the sanity of anyone asking it – SPEND MORE MONEY!  Hire more people to scrutinize the data before it goes out the door.  In the US life sciences field right now, we’re all bemoaning the glut of former grad students and post-docs struggling to find jobs. They have intimate knowledge of the subject matter. Hire them and you’d fix two problems in science at once.


(2) Are there ways to improve the signal-to-noise ratio that we haven’t thought of?

By signal-to-noise, one presumes Dr. Marcus is referring to the number of allegations that come in the door but then turn out not to be real problem data.  Again, this is a tractable problem.  More eyes on the data = easier to figure out what’s real.  Hire more trained eyes.

Another really simple solution is to USE THE TOOLS ALREADY AVAILABLE for plagiarism detection – things such as iThenticate and DejaVu.  It is shocking that we’ve had text-plagiarism software for well over a decade now, yet most journals simply don’t use it. Why? Because it costs money!  What’s interesting is that software tools are now also being developed to do the same thing for data and images (I know of some in the pipeline, but can’t mention specifics).
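I obviously can’t reveal how the in-pipeline tools work, but the core concept is not rocket science. Here’s a toy sketch of image screening using perceptual hashing, built on the open-source Python libraries Pillow and ImageHash – the threshold is an arbitrary number picked for illustration, and the real tools are no doubt far more sophisticated:

```python
# Toy image-duplication screen: flag pairs of submitted figure panels
# that look suspiciously similar. Uses the open-source Pillow and
# ImageHash libraries; real screening tools are far more sophisticated.
from itertools import combinations

from PIL import Image
import imagehash

def screen_images(paths, threshold=5):
    """Return pairs of images whose perceptual hashes differ by no more
    than `threshold` bits (0 = identical, small = near-duplicate)."""
    hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
    flagged = []
    for a, b in combinations(paths, 2):
        distance = hashes[a] - hashes[b]  # Hamming distance between hashes
        if distance <= threshold:
            flagged.append((a, b, distance))
    return flagged

# e.g., screen all the figure panels from one submission:
# print(screen_images(["fig1a.png", "fig1b.png", "fig2a.png"]))
```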

So, we are literally a couple of years away from the point when any submitted paper – both text and images – can be screened automatically by software. After that watershed moment, does anyone want to gamble on how long it will take for such tools to gain widespread adoption across the publishing industry?  Don’t hold your breath!

The other answer to this question is to think differently about the problem of signal:noise. In the old days, the only way to deal with it was to boost the signal and cut the noise.  Computing power and labor are now so cheap that it’s easier to just take everything – signal, noise, the lot – and look at it all.  If there’s some noise in there, who cares? It’s actually more expensive to expend effort deciding up front what’s noise. Examine everything and THEN decide; you gain nothing by filtering out the noise first.


(3) Is there a process for investigating that would be more streamlined, coordinated and efficient?

Yes, see above. Software and more pairs of eyes. Spend more money (does anyone see a theme developing here?).  As regards efficiency, the fact that it was necessary to ask all of these questions tells you that the current system is not working. Quite frankly, anything the publishers do differently will be more efficient than the current approach.


(4) Would allowing/requiring authors to post all the raw data at the time of publication help?

Yes, both of these. It simply beggars belief that in 2015, when I can fit the entire back catalog of thousands of journals on a pocket-sized hard disk that costs <$100, journals are still on the fence about whether to allow authors to post all the data associated with a paper.  Hell, they’re still imposing limits on word count, pages, number of images, etc.

Data is just about the cheapest thing in the world to store. Journals need to get out of the last century and embrace scientists’ wish both to include all their own data, and to see all the data in other people’s papers.


(5) Should we require that all whistleblowers be non-anonymous to ensure accountability? What if we enact this policy and need to let a serious claim go unaddressed because the whistleblower refuses to reveal their identity?

“Facts should be viewed as such, regardless of where they come from.”  As a society, we are in grave danger when we attach relative importance to facts depending on the perceived importance of the messenger. I experienced this first-hand on PubPeer, when a scientist whose work I questioned went on a long diatribe about my own qualifications – as if that somehow changed the facts of the case.

Although the term used in Dr. Marcus’s question is “accountability”, inherent in it is the assumption that anonymity equates to unreliability. There are anecdotes about the infamous Clare Francis being wrong much of the time, but that’s N=1, and I think we can do slightly better here. PubPeer has admirably demonstrated, on thousands of occasions, that anonymous reporters should be taken seriously, because they are very often right. Equally, informants who use their real names are often wrong. There is no hard evidence (to the best of my knowledge) that the reliability of allegations correlates with the named status of the accuser.

The danger with a “named” approach is the slippery slope toward judging the message by the importance of the messenger. Will a journal take an allegation more or less seriously if it comes from a post-doc’ versus a senior PI?  What about an undergrad?  What about a non-scientist member of the public? What if the accuser is a former employee of the paper’s author – does that somehow disqualify their opinion, or does it make their accusations more valid because they may have first-hand knowledge of the case?  All these examples lead to a simple conclusion – identity does not matter.

If a journal chooses to assign “importance” to a series of allegations based on who they came from, one must assume that a similarly biased system will exist at the other end of the investigation, i.e., the journal may choose to take allegations seriously or not depending on the status of the scientist being accused.  If a journal had a policy stating “we don’t investigate Nobel prize winners”, that would be offensive. Why is ignoring anonymous reporters any less offensive?  Both strategies attach undue importance to the messenger, not the facts.


(6) Should we only consider concerns related to papers published in the last 5 years? Seems fine for “small things” like gel splicing, etc., but presumably, if a concern arose that some body of work was fraudulent, even if it was 10 years old, wouldn’t we want to correct the published record?

A possible strategy might be to investigate everything fully within a given time frame (say 6 years – that’s the ORI statute of limitations), but for older papers to apply a graded approach based on the other papers from the same author or group.

For example, if a paper from 10 years ago is questioned, and the problem is unreported gel splicing, this may indeed represent an honest mistake by authors who were simply following accepted contemporary practices (now universally acknowledged to be wrong).

However, that same paper, juxtaposed against a backdrop of 20 similar papers all with problems – perhaps with a few already retracted or corrected – suggests a pattern that may be indicative of misconduct or, at the very least, sloppy data-handling habits.
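To make the graded approach concrete, here’s a hypothetical version of that triage rule in Python. Only the 6-year window comes from ORI; the inputs and thresholds are invented purely for illustration:

```python
# Hypothetical triage logic for allegations about older papers.
# The 6-year window matches the ORI statute of limitations; the
# pattern thresholds are invented numbers, for illustration only.
def triage(paper_age_years, flagged_papers_by_author, retractions_by_author):
    if paper_age_years <= 6:
        return "full investigation"        # within the ORI window
    if retractions_by_author > 0 or flagged_papers_by_author >= 5:
        return "full investigation"        # pattern suggests misconduct
    if flagged_papers_by_author > 0:
        return "request original data"     # isolated, but worth a look
    return "note on file"                  # likely a product of its time
```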


That last point – spotting patterns across papers – is where another key solution comes in… COORDINATION.  None of the above proposals will work if each journal tries to implement them individually.  There has to be a database, shared between journals and publishers, to keep track of all these problems.  The simple idea: as soon as an allegation comes in, the journal’s investigator looks up the authors in the database and sees whether any other journals have ongoing investigations involving the same authors. Right now, doing a search on PubPeer, PubMed Commons, and Retraction Watch is a reasonable proxy for this, but far from comprehensive or perfect.
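No such shared database exists (that’s the complaint), but the mechanics would be trivial – one table and one query. A minimal sketch, with an entirely invented schema:

```python
# Sketch of a shared cross-publisher misconduct database. The schema
# and field names are invented; the point is the lookup, not the details.
import sqlite3

conn = sqlite3.connect("investigations.db")
conn.execute("""CREATE TABLE IF NOT EXISTS investigations
                (author TEXT, journal TEXT, doi TEXT, status TEXT)""")

def open_cases(author_name):
    """On receipt of a new allegation, check whether any other journal
    already has an open case involving the same author."""
    cursor = conn.execute(
        "SELECT journal, doi, status FROM investigations "
        "WHERE author = ? AND status = 'open'", (author_name,))
    return cursor.fetchall()

# e.g., open_cases("A. Badger")  # hypothetical author name
```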

Finally, another potential solution that has been overlooked is the role that funding agencies have to play in this process. In case anyone didn’t notice, the NIH open-access mandate was introduced in 2008 and sat pretty much ignored for a couple of years. Then NIH came up with a simple carrot and stick – if you don’t comply, you won’t get funded.  Boom!  Now everyone is super careful to publish their work in journals that comply.

What if NIH were to draft a set of regulations for how journals ought to deal with these problems?  Not guidelines or recommendations, but actual RULES… Want to publish your NIH-funded work? It has to be in a journal that plays by the rules, otherwise you can’t list it on your biosketch.  Just watch how quickly all the journals would fall into line. Their bread-and-butter is packaging up science they didn’t pay for and selling it back to the public, so if the hand that feeds them says “jump”, they will jump. A mass exodus of researchers from journals that don’t comply would be cool.  Maybe NIH could call the new mandate “Regulations On Biomedical Oversight Concerning Obsolete Publishing Practices”


COPE: Nothing more than a useless trade association

I’ve said it before and I’ll say it again – COPE (the Committee on Publication Ethics) is nothing more than a trade association / lobby group for the publishing industry.  Its real job is to provide a feel-good excuse for the multi-$bn publishing industry to say “hey look we’re doing something about ethics”, in return for subscription fees. In the same way that being listed in Who’s Who appeals to vain individuals, being listed as a COPE member buys journals a semblance of ethical credibility.

What if that credibility counts for nothing?  As reported yesterday by Neuroskeptic, a new study by Morten Oksvold found a shockingly low rate of response from journal editors when confronted with blatant evidence of data irregularities in over 40 papers spread across 3 journals.  The response rate?  Zero. Zilch. Nada. Niente. Nil.

Guess what? All 3 journals are COPE members! The COPE Code of Conduct specifically tells editors to respond, stating what they plan to do in such cases. Ignoring such communications is a definite no-no.

From my own experience, this is a common outcome.  Just about every journal and major publisher is a member of COPE, and yet time and again we see COPE guidelines being openly flouted. In one of the cases listed in that post (the J. Neurosci. paper) I’ve been waiting over 2 years for the publisher to get their proverbial feces together. Last fall the case was raised to the level of a formal investigation by the ethics committee of the Society for Neuroscience, but they’ve stopped responding to my emails, despite my CC’ing COPE. The burden for ensuring that alleged data problems are dealt with in a timely manner falls firmly at the feet of the journals and their so-called trade association. It should not require Herculean efforts on the part of bloggers.  We know how to do this stuff properly – it just requires lazy editors to do their damn jobs!

What are the consequences for a journal or editor if a breach of the COPE guidelines occurs?  Well, based on the Cell case I outlined in that previous post, there were none. The editor still has her job. There was no formal public announcement that the COPE guidelines had been breached. No indication that the person or persons behind the blatant conflict-of-interest suffered any negative effects whatsoever. Just a simple email to me (the complainant) stating that “procedures will be reviewed and improved”, and we all move on, pretending this is fixed and won’t happen again.

The underlying issue here is that COPE doesn’t have any teeth. All of the power is held by the journals, and COPE is their obedient little lap dog. When journals screw up, COPE could threaten to rescind their membership, but who in their right mind is going to challenge a multi-$bn giant such as Elsevier?

As scientists, we need to be frank about the reality of the relationship between the publishing industry and COPE.  If we want ethics cases handled properly, squealing to a pay-to-play vanity club is not the answer. COPE has consistently proven that it doesn’t have the power to change deeply entrenched behavior by editors. In contrast, taking matters into our own hands by using social media and sites such as PubPeer continues to be an effective strategy for getting results.

University admin’ run amok (this time it’s IT)

It’s been a while since I wrote about the ridiculous administrative burden that is gradually sucking. the. will. to. live. out of everyone in academia.  Today I want to focus on a specific example… what happens when the IT people take a reasonably simple task, and strangle the complete fuck out of it?

It starts out with a relatively simple problem… we use animals in my lab’, so we’re required to have animal protocols in place. The body administering these protocols is called UCAR – the University Committee on Animal Resources. [Fun fact – it’s almost always called IACUC at other institutions, but our UCAR is actually one of the oldest, so it’s been around since well before that naming trend emerged.]

One of my animal protocols is up for annual renewal – a simple series of 3 questions that could be handled in a 30-second email: Is the protocol still active? Has anything changed since last year?  How many animals have been bred and used since the last renewal?  There’s a far more comprehensive review process every 3 years, or whenever you need to change/add anything, but so long as you’re just trucking along and everything is in compliance, the annual protocol renewal is as close as it gets to a rubber-stamp affair – until the IT department gets involved…

UCAR has a simply wonderful (!) online interface for dealing with submission and processing of protocols. It’s called TOPAZ – but don’t click that link in Firefox, because it will crash your browser even if you have the correct Silverlight plug-in installed!  One time, TOPAZ went down “for weekend maintenance”, and when it came back online the following Thursday, all the menus were in German. The site can best be described as a total clusterfuck – pasting in text from other documents causes huge formatting problems; menu scrolling is a disaster; navigation is utterly counter-intuitive. All really simple “web design 101” stuff, but when you have a virtual monopoly on this sort of product, you don’t have to give a shit about the customer experience!

But today that’s not the problem.  Oh no, today I had to spend 15 minutes trying to find the frickin’ link to TOPAZ (yeah yeah I should have it bookmarked, but I just replaced my computer and didn’t migrate everything yet).  Anyway…

The Website Overhaul
There used to be a simple web interface at URMC, but recently the IT folks have been overhauling everything – something to do with branding and other concepts way above my pay grade. As highlighted by this XKCD comic, university websites are renowned for providing zero of the information people actually visit them for.  At URMC, they’ve taken it to a whole ‘nuther level. This is what you see at the main page…
[Screenshot: URMC main page]

That’s a normal-sized browser window, taking up 2/3 the width of a 1440×900 wide-screen monitor. See any menu bars?  See anything worth clicking if you actually WORK at the place?  No. To get to the good stuff you have to scroll past all the PR bunk to the end of the page…
[Screenshot: URMC main page, lower section]

See anything there about resources or other useful links for researchers (such as a link to UCAR)? No. BOOM!  Of course, how could I have missed it?  See that little thing at the top right of the main page (in the last-but-one image)? That’s a menu link. Someone (probably working on a 21″ iMac) thought it would be a good idea to have the menu options that appear on a regular webpage collapse into a small icon if the browser window is below a certain width. If only I’d widened my browser to full screen width, this is what I would have seen at the top of the page…
[Screenshot: URMC main page at full width, menu bar visible]

OK, let’s follow that “Research” link and go to Resources for Researchers. Nope, nothing there.  What about Shared Resource Laboratories? No, that’s all the core facilities. What about the listing of Departments and Centers?  Well, it’s not under U for UCAR. Maybe A for animal? Bingo – Animal Resource, and it has a link to UCAR’s page…
[Screenshot: UCAR main page]

But is there a link to TOPAZ? No. Maybe it’s in that menu thing over on the top right? No. Oh, but wait – TOPAZ only works with Internet Explorer, so let’s fire that up before we get too far down the rabbit hole in Firefox.
[Screenshot: UCAR page in Internet Explorer]

Same deal.  Oh, but look – in addition to that menu thing in the top right, there’s a little “+” sign. It wasn’t there in Firefox (see above). Hmm… wonder what that does?
[Screenshot: UCAR menu expanded, showing Animal Use Protocols]

Right there – 4th item down the menu – “Animal Use Protocols”.  Click that, and you get a page describing protocols, but still no link to the submission site!  Oh, but now that you’re on the protocol page, go back up to the “+” sign and click it to expand again… Now there’s a new menu item below Animal Use Protocols – “Submit Protocol Online”.  It wasn’t there before.
[Screenshot: “Submit Protocol Online” link]

OK. Clicking that link brings you to this page, where you can click the link to log into TOPAZ.  If you’re lucky, TOPAZ might launch the first time, or it might crash, but it usually works the second or third time.

Oh, but it’s not over yet.
Having wasted my time on this, I decided to file a complaint with web services (another 5 minutes to find the appropriate link, since it’s not listed under any of the obvious headings such as IT or Computing).

As expected, there’s no number to call and no contact email address, just a button to click to File a Support Ticket. That brings up this window, which requires log-in (same user ID as for TOPAZ, which incidentally is the same user ID for email, the HR system, a bunch of other internal sites, plus WiFi access – can you say security risk?).  Anyway, you click “New Ticket” and the browser crashes! Mother fucker!
[Screenshot: web services support-ticket window]

Some people question why I run my own lab’ website instead of entrusting it to the institution.  In future I will simply direct them to this blog post.

More super happy fun admin times as a professor…
(1) We’re trying to hire a new post-doc’ who’s on an H-1B visa. By last count I’m up to 52 emails between myself, the Office of Postdoctoral Affairs, my Department, and the International Services Office.

(2) We also recently hired a new lab tech’ from another lab’ whose PI is moving away. The logistics of doing this were a total nightmare, including a lovely 45-minute ‘phone call with HR to be briefed on the nuances of NY state labor law. Whether this person will get their first paycheck on time is still up in the air.

(3) Thanks to a lag in ledger reporting on a soon-to-finish grant, what was thought to be a $5k surplus in need of urgent spend-down turned into a $4k deficit in the space of a week. The phrase “it hasn’t hit the ledgers yet” keeps me awake at night.

(4) All the same old crap I wrote about before is still there piling up, getting in the way of my ability to actually DO science.

We all have a duty, as scientists and university faculty, to fight this continual onslaught of administrative BS.  If you haven’t read this book, do so, and get angry.


The end of an era, and the beginning of several new ones…

This spring there have been a lot of changes around the lab…

– Our technician Bill Urciuoli graduated with his MBA from the Simon School of Business at UR (tuition benefits FTW), and left us in April to start a new job in Williamsport, PA.  We will have a few weeks without a technician (oh joy!), and our new tech’ will hopefully be starting on June 1.

– Long-term colleague Dr. Chad Galloway has started a new job as a staff scientist in the Department of Ophthalmology here at URMC. Here’s a picture of Bill and Chad at their leaving party – the bucket on the floor is part of a beer brewing kit they each got as a leaving gift.


– We’ve also had an RIT co-op student in the lab’ for the past 3 months.  Nick Gulati has been doing a lot of 3D printing, and we recently obtained some electrically conductive printer filament from Proto-Pasta, which he’s been using to print custom electrodes (more news on this soon).

– Our multi-PI R01 (with Keith Nehrke from Medicine and Cole Haynes from Sloan Kettering) got funded!  The project is entitled “Role of the mitochondrial UPR in ischemic protection”, and as the name suggests, will seek to characterize the key players in the mitochondrial unfolded protein response, and how activating this pathway might be able to protect the heart from ischemia-reperfusion injury.

– Andrew Wojtovich (former post-doc’ and now an R.A.P. in the Department of Medicine) got a fundable score on his first R01, entitled “Optogenetic control of mitochondrial ROS generation”. Andrew was also selected to give a talk at the annual Biochemistry Department retreat.

– We were privileged to host Aubrey de Grey of the SENS Research Foundation for a seminar. I’ve known Aubrey for more than 20 years, from back in the UK, so it was great to catch up on old times and hear all about his exciting recent advances in aging research.

– We had a visit from former grad’ student Dr. Lindsay Burwell, who reports that she’s going to start a new position as an Assistant Professor of Chemistry at Wells College this fall.


3D printed gel combs

One of the ideas out there in the “3Dprintosphere” is that seemingly common plastic doo-dads are way too expensive, and would be a lot cheaper if they could be custom fabricated on-site.  A classic example is gel combs – those little plastic things we all use to form wells in our SDS-PAGE gels. For the privilege of owning one of these small pieces of plastic, a reputable manufacturer of mini-gel apparatus will charge $37 for a pack of 5. Add in shipping costs and you’re looking at $10 a pop, for something that costs maybe 10c to make.

So, SketchUp, Repetier, and PrintrBot to the rescue…


That’s a custom 7-well, 1.5mm comb. The reason we did this was to load more sample.  A regular 10-well comb has wells 10mm deep and 5mm wide, so they hold ~75μl each. These wells are 12mm deep and 7mm wide, so they hold 126μl each (previously we had to use tape to join together 2 wells of a 15-well comb to make a wide lane).
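Those volumes come straight from the well dimensions – depth × width × gel thickness, with 1 mm³ equal to 1 μl:

```python
# Well volume = depth x width x gel thickness; 1 mm^3 = 1 ul.
# Dimensions are those quoted above, for a 1.5 mm-thick gel.
def well_volume_ul(depth_mm, width_mm, thickness_mm=1.5):
    return depth_mm * width_mm * thickness_mm

print(well_volume_ul(10, 5))  # standard 10-well comb: 75.0 ul
print(well_volume_ul(12, 7))  # custom 7-well comb: 126.0 ul
```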

This one was printed at 0.15mm layer height using MakerBot PLA filament at 225°C with solid rectilinear infill. It took about 10 minutes to design and 16 minutes to print, and uses about 900mm of filament, so assuming a cost of ~13c per meter** that’s a 12c material cost. If it breaks or wears out, who cares?  We could print a brand new one every month for 6 years and still be ahead on cost. And we can customize the well size for whatever sample is needed.

The STL files for this, plus those for standard 10-well and 15-well combs (for the mini-gel box maker known as “big green”), are in this zip folder. I also threw in the SketchUp file with a blank comb body, so you can draw in the lines and use the push/pull tool to design custom well sizes, as we did here for the 7-well version.



**PLA density = 1.24 g/cm³ (1.24kg per liter).
A 1kg roll costs ~$45, depending on where you source it.
Filament diameter = 1.75mm, so radius = 0.875mm; πr² and all that malarky means 1 meter of filament has a volume of 2.405 cm³.
That 2.405 cm³ weighs 2.982 grams.
So a 1kg spool holds ~335 meters, i.e., 13.4c per meter.
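Or, for anyone who wants to plug in their own spool price, the same arithmetic as a quick script:

```python
# The footnote above, as code: cost per meter of 1.75 mm PLA filament.
import math

DENSITY_G_PER_CM3 = 1.24      # PLA density
SPOOL_COST_USD = 45.0         # ~cost of a 1 kg roll
RADIUS_CM = 0.175 / 2         # 1.75 mm filament diameter

volume_per_m_cm3 = math.pi * RADIUS_CM**2 * 100      # ~2.405 cm^3
grams_per_m = volume_per_m_cm3 * DENSITY_G_PER_CM3   # ~2.98 g
meters_per_spool = 1000 / grams_per_m                # ~335 m
cost_per_m = SPOOL_COST_USD / meters_per_spool       # ~$0.134

print(f"{cost_per_m * 100:.1f} cents per meter")     # -> 13.4
```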