Metabolomics, peer review, and an ode to the Langendorff perfused heart

Finally, our metabolomics paper is in press at J. Mol. Cell. Cardiol. (email if you want a reprint). TL;DR… SIRT1 drives most (~85%) of the metabolic alterations that occur in the heart during acute ischemic preconditioning (IPC).

This was quite a tough paper to get published. We started the project in spring 2013, and wrote it up in fall 2014. It got rejected from a big journal (IF>15) first, then went 2 rounds at a mid-level (IF>10) journal before being rejected again, and then it went 2 rounds at JMCC before acceptance. All told, a year of back and forth with reviewers and editors.

The model system we used to investigate this topic was the Langendorff perfused mouse heart and splitomicin, a pharmacologic inhibitor of SIRT1. The basic issue with the reviews that ended up as rejections was an insistence by reviewers that we do things in-vivo and in knockout mice.

Normally, we’re big fans of moving toward more physiologically-relevant model systems, but in this case there are very specific reasons to use a perfused heart and a pharmacologic inhibitor.  Here are some key points…

(1) Regarding pharmacology, the inhibitor we used is one we’d already shown can block acute IPC, so it’s a good candidate to test whether it also blocks the metabolic effects of IPC. Also, we had already shown that a SIRT1 KO mouse heart cannot be preconditioned, and that the endogenous protection seen in the SIRT1 over-expressing transgenic mouse can be blocked by 5 min. infusion of the inhibitor. Thus, the time-frame for the effects of SIRT1 in IPC is very short – on the order of 20 min. The SIRT1 KO mouse has known long-term metabolic alterations which would mask any changes we’d look for in IPC.

(2) Regarding in-vivo vs. in-vitro, it all boils down to sampling time. In our system, we can clamp the heart in liquid nitrogen-cooled Wollenberger tongs, straight off the perfusion rig. In effect, it goes from beating to frozen in less than a second. That’s important for getting reliable information on labile metabolites such as ATP, NADH, GSH and other redox-sensitive species.

The problem is, when you precondition a mouse heart in-vivo, it’s a focal ischemia model. Only part of the heart is ischemic (the bit downstream of the vessel you occlude), so if you try to dissect out the ischemic zone, you delay the clamping by a couple of minutes and destroy all the labile metabolites during the dissection. Alternatively, if you clamp the whole heart right out of the animal into liquid nitrogen you create 2 problems… First, all the changes in the ischemic area get “diluted” with the other part of the heart that wasn’t ischemic (the so-called “area not at risk”).  Second, you’re also sampling blood, so you don’t know if the changes you see are in the myocardial tissue or the blood that comes along for the ride (by our estimates when you clamp a heart out of a mouse, about 1/3 of the sample is blood). In contrast, the perfused heart system has no blood, so the whole sample is myocardium. Also the entire heart is ischemic, so there’s no dilution.
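For anyone who likes numbers, here’s a quick back-of-envelope sketch of that dilution problem. It’s not from the paper – apart from the ~1/3 blood figure mentioned above, the fractions and fold-change below are invented purely for illustration.

```python
# Back-of-envelope illustration (not from the paper): how dilution by the
# "area not at risk" and by trapped blood attenuates an apparent metabolite
# change when a whole in-vivo heart is clamped. Only the ~1/3 blood fraction
# comes from the text above; every other number is a made-up example.

ischemic_fraction = 0.4      # hypothetical fraction of the myocardium at risk
blood_fraction = 1.0 / 3.0   # rough estimate quoted above
true_fold_change = 3.0       # hypothetical change confined to the ischemic zone

tissue_fraction = 1.0 - blood_fraction
ischemic_part = tissue_fraction * ischemic_fraction        # at-risk myocardium
remote_part = tissue_fraction * (1.0 - ischemic_fraction)  # area not at risk

# Apparent fold change in the whole clamped sample, assuming the remote zone
# and the blood both sit at baseline (defined as 1.0)
apparent = (ischemic_part * true_fold_change
            + remote_part * 1.0
            + blood_fraction * 1.0)

print(f"True fold change in the ischemic zone:    {true_fold_change:.1f}")
print(f"Apparent fold change in whole-heart clamp: {apparent:.2f}")
```

With those made-up numbers, a genuine 3-fold change in the ischemic zone shows up as only ~1.5-fold once the rest of the heart and the blood get averaged in.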

(3) The other major issue concerns the type of metabolomics analysis you want to perform.  In this paper, we performed not only steady-state metabolomics (i.e., measuring the relative levels of metabolites), but also 13C labeled substrate tracing. The latter can yield proxy information about metabolic flux, which steady-state measurements cannot. This is easy in the perfused system… just throw 13C-glucose or 13C-palmitate in the perfusion media, but in-vivo this creates problems. You can’t just deliver labeled substrate to a whole mouse and assume it’s only being metabolized by the heart on first pass.  For example, the cardiac/liver Randle cycle can result in labeled glucose being turned to labeled fat by the liver, then sent to the heart as fuel. Also, whatever 13C-substrate you infuse is going to compete with endogenous blood-borne substrates in the animal. In the perfused system you can swap out the whole substrate (i.e., replace all the glucose with 13C-glucose), so you have much tighter control over delivery.
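To give a flavor of the kind of readout 13C tracing provides, here’s a minimal sketch of calculating fractional 13C enrichment from a metabolite’s mass isotopomer distribution. This is emphatically not our actual analysis pipeline – the peak intensities are invented, and the natural-abundance correction a real analysis would apply is skipped to keep things short.

```python
# Minimal sketch (illustration only): fractional 13C enrichment of a metabolite
# from its mass isotopomer distribution (M+0 ... M+n). A real analysis would
# first correct the raw signals for natural 13C abundance; that step is
# deliberately omitted here for brevity.

def fractional_enrichment(intensities):
    """Average 13C labeling per carbon, from raw isotopomer signal intensities.

    intensities[i] is the signal for the M+i isotopomer, so the metabolite has
    len(intensities) - 1 labelable carbons.
    """
    total = sum(intensities)
    n_carbons = len(intensities) - 1
    mids = [x / total for x in intensities]                    # normalize to fractions
    return sum(i * m for i, m in enumerate(mids)) / n_carbons  # weighted mean label

# Hypothetical citrate isotopomer intensities after perfusion with U-13C glucose
citrate = [4200, 900, 1500, 600, 300, 150, 100]  # M+0 through M+6
print(f"Fractional enrichment: {fractional_enrichment(citrate):.1%}")
```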

So, this really is one of those cases where Krogh’s principle comes into play. The in-vitro, pharmacology-based approach really was the best system available to answer the question at hand (that question being: what fraction of the metabolic changes that occur in acute IPC are governed by SIRT1 signaling?).

Naturally, we argued all the above points and it didn’t get us anywhere! As a lab that routinely uses both in-vivo and knockout models, it’s rather frustrating to be locked out of publishing in certain journals because we chose to use an allegedly inferior system. It’s annoying that some journals have a myopic focus on knockouts and in-vivo data which precludes them from publishing otherwise solid work.  Thankfully JMCC seems to have a more sensible approach to this type of work!


Mito ROS Slides

Last week I had the honor of being a speaker at the “MiP” (mitochondrial physiology) school in Greenville NC. The event is one of a long series organized by Erich Gnaiger (inventor of the Oroboros Oxygraph-2k respirometry apparatus). The meeting included a series of methods workshops and scientific talks from abstracts, as well as didactic lectures based on the framework of the book “Bioenergetics 4”, one of whose authors (David Nicholls) gave several lectures.

My lecture was on “Mitochondrial ROS generation”, a seemingly massive topic which cannot really be covered in any depth in 45 minutes.  But anyway, here are the slides (PDF), in case anyone might find them useful.

July lab update

It’s been a slow summer so far in the lab, but we’ve had a few personnel changes and other things to update…

– Marcin Karcz finished his 1-year research residency funded by FAER, and has now moved on to a cardiac anesthesia fellowship at Columbia in NY.

– We have a new lab’ tech, James Miller, who joins us from the Department for Oral Biology.

– Our new post-doc’ fellow Yves Wang started July 1. Yves joins us from Case Western in Cleveland, where he worked on cardiac imaging.

– We’re having “fun” trying to get stuff published.  Currently we have 3 papers out there going through the wringer – one on anesthetic preconditioning, one on SIRT1 and cardiac metabolism, and another on autophagy.

– We filed a provisional patent on one of our drugs that protects the heart from IR injury when delivered at the moment of reperfusion, at a concentration of only 10 nM!  The paper on this is being written up for submission in the fall.

– Too many awesome meetings this year… Jimmy & Sergiy are at the AHA BCVS meeting in New Orleans this month. Paul & Owen are going to the TRiMAD meeting in State College PA this November. Paul is also presenting at the APS Physiological Bioenergetics meeting in Tampa in September and the MiP bioenergetics workshop in Greenville NC in August, and attending the Society for Heart & Vascular Metabolism (SHVM) meeting in NY in October. That’s not counting the AHA Sessions and the SFRBM Annual meeting, both in November. This has been a seriously big year for conferences!


Cell asked for help – here you go…

Recently, Cell editor-in-chief Emilie Marcus posted an article from the perspective of an editor, decrying the flood of allegations of data mis-handling that has hit editors’ desks in recent years, and asking what to do about it. I’ve already opined on how Cell royally screwed up this process in the past, so instead let’s focus on the actual questions asked, and some solutions. Here are the specific questions posed by Dr. Marcus…


(1) At a time when there is increasing pressure to reduce the costs of publishing, how should journals, institutions, and funders apportion time and resources to addressing a burgeoning number of alerts to potential instances of misrepresentation or misconduct?

Charge less and spend more? Scientific publishing is a multi-billion dollar industry, with profit margins routinely in the region of 35%. All the labor (i.e., peer review) is essentially free, and the actual production costs are minimal now that everything is on-line.

The issue here is that Cell and the other publishers want it both ways – they like the title “gatekeepers of truth”, but they don’t want to shell out the cash required to ensure that what they’re peddling is actually true!  Seriously, the answer to this one is so simple, it really questions the sanity of anyone asking it – SPEND MORE MONEY!  Hire more people to scrutinize the data before it goes out the door.  In the life sciences field in the US right now, we’re all bemoaning the glut of former grad students and post-doc’s struggling to find jobs. They have intimate knowledge of the subject matter. Hire them and you’d fix 2 problems in science.


(2) Are there ways to improve the signal-to-noise ratio that we haven’t thought of?

By signal to noise, one presumes Dr. Marcus is referring to the number of allegations that come in the door but then turn out not to involve genuinely problematic data. Again, this is a tractable problem. More eyes on the data = easier to figure out what’s real. Hire more trained eyes.

Another really simple solution is to USE THE TOOLS ALREADY AVAILABLE for plagiarism detection – things such as iThenticate and DejaVu. It is shocking that we’ve had text-plagiarism software for well over a decade now, but most journals simply don’t use it. Why? Because it costs money! What’s interesting is that software tools are now also being developed to do the same thing for data and images (I know of some in the pipeline, but can’t mention specifics).
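For the curious, the basic idea behind text-similarity screening is not rocket science. Here’s a toy sketch – nothing to do with how iThenticate or any commercial tool actually works under the hood – using overlapping word n-grams (“shingles”) and their Jaccard overlap.

```python
# Toy text-similarity check (illustration only, not how commercial tools work):
# break each document into overlapping word n-grams ("shingles"), then compute
# the Jaccard overlap between the two shingle sets.

def shingles(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=5):
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical usage: flag manuscript pairs above some threshold for human review
doc1 = "the mitochondrial permeability transition pore opens during reperfusion injury"
doc2 = "the mitochondrial permeability transition pore opens during early reperfusion"
print(f"Similarity: {jaccard_similarity(doc1, doc2):.2f}")  # ~0.43 for this pair
```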

So, we are literally a couple of years away from the point when any submitted paper – both text and images – can be screened automatically using software. After that watershed moment, does anyone want to gamble on how long it will take before such tools gain widespread adoption across the publishing industry? Don’t hold your breath!

The other answer to this question is to think differently about the problem of signal:noise. In the old days, the only way to do it was to boost the signal and cut the noise. Computing power and labor are now so cheap, it’s easier to just do everything – take the signal, take the noise, take everything, and look at it all. If there’s some noise in there, who cares? It’s actually more expensive to expend effort trying to figure out what’s noise up front. Just examine everything and THEN decide if it’s noise. You don’t gain anything by filtering out the noise first.


(3) Is there a process for investigating that would be more streamlined, coordinated and efficient?

Yes, see above. Software and more pairs of eyes. Spend more money (does anyone see a theme developing here?). As regards efficiency, the fact that it was necessary to ask all of these questions tells you that the current system is not working. Quite frankly, anything the publishers do differently will be more efficient than the current approach.


(4) Would allowing/requiring authors to post all the raw data at the time of publication help?

Yes, both of these. It simply beggars belief that in 2015, when I can fit the entire back catalog of thousands of journals on a pocket-sized hard disk that costs <$100, journals are still on the fence about whether to allow authors to store all the data associated with a paper.  Hell, they’re still imposing limits on word count, pages, number of images etc.

Data is, quite literally, the cheapest possible thing in the world that you can store. Journals need to get out of the last century and embrace scientists’ wish to both include all their own data, and to see all the data in other people’s papers.


(5) Should we require that all whistleblowers be non-anonymous to ensure accountability? What if we enact this policy and need to let a serious claim go unaddressed because the whistleblower refuses to reveal their identity?

“Facts should be viewed as such, regardless of where they came from.” As a society, we are in grave danger when we attach relative importance to facts depending on the perceived importance of the messenger. I experienced this first-hand on PubPeer, when a scientist whose work I questioned went on a long diatribe about my own qualifications – as if that somehow changed the facts of the case.

Although the term used in Dr. Marcus’ question here was “accountability”, inherent in it is the assumption that anonymity equates to unreliability. There are anecdotes about the infamous Clare Francis being wrong much of the time, but that’s N=1 and I think we can do slightly better here. PubPeer has admirably demonstrated, on thousands of occasions, that anonymous reporters should be taken seriously, because they are very often right. Similarly, informants who use their real names are very often wrong. There is no hard evidence (to the best of my knowledge) that the reliability of allegations is correlated with the named status of the accuser.

The danger with a “named” approach is the slippery slope toward an “importance of the messenger” pratfall. Will a journal take an allegation more or less seriously if it comes from a post-doc’ versus a senior PI? What about an undergrad? What about a non-scientist member of the public? What if the accuser is a former employee of the paper’s author – does that somehow disqualify their opinion, or does it make their accusations more valid because they may have first-hand knowledge of the case? All these examples lead to a simple conclusion – identity does not matter.

If a journal chooses to assign “importance” to a series of allegations based on who they came from, one must assume that a similar biased system will exist at the other end of the investigation, i.e., the journal may choose to take allegations seriously or not depending on the status of the scientist who is being accused.  If a journal had a policy stating “we don’t investigate Nobel prize winners”, that would be offensive. Why is ignoring anonymous reporters any less offensive?  Both strategies attach undue importance to the messenger, not the facts.


(6) Should we only consider concerns related to papers published in the last 5 years? Seems fine for “small things” like gel splicing, etc., but presumably, if a concern arose that some body of work was fraudulent, even if it was 10 years old, wouldn’t we want to correct the published record?

A possible strategy might be to investigate everything fully within a given time frame (say 6 years – that’s the ORI statute of limitations), but then for older papers apply a graded approach dependent on the other papers from an author or group.

For example, if a paper from 10 years ago is questioned, and the problem is un-reported gel splicing, this may indeed represent an honest mistake by authors who were simply following accepted (and now universally acknowledged to be wrong) practices of the time.

However, that same paper juxtaposed against a back-drop of 20 similar papers, all with problems, perhaps with a few already retracted or corrected, suggests a pattern that may be indicative of misconduct or, at the very least, sloppy data-handling habits.

___________

This last point is where another key solution comes in… COORDINATION. None of the above proposals will work if each journal tries to implement them individually. There has to be a database, shared between journals and publishers, to keep track of all these problems. The simple idea would be that as soon as an allegation comes in, the journal’s investigator would look up the authors in the database and see whether any other journals had active, on-going investigations about the same authors. Right now, doing a search on PubPeer, PubMed Commons, and Retraction Watch is a reasonable proxy for this, but far from comprehensive or perfect.
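To make the coordination idea concrete, here’s a toy sketch of what such a shared registry could look like. It’s purely hypothetical – no such cross-publisher database exists today, and all the names below are made up.

```python
# Toy sketch of a shared cross-journal allegation registry (purely hypothetical).
# Each incoming allegation gets logged against its authors; an editor can then
# check whether other journals already have open cases involving the same people.

from collections import defaultdict

registry = defaultdict(list)  # author name -> list of case records

def log_allegation(authors, journal, status="open"):
    """Record an allegation against each listed author."""
    for author in authors:
        registry[author].append({"journal": journal, "status": status})

def open_cases_elsewhere(author, this_journal):
    """Active investigations about this author at other journals."""
    return [case for case in registry[author]
            if case["journal"] != this_journal and case["status"] == "open"]

# Hypothetical usage
log_allegation(["A. Smith", "B. Jones"], "Journal X")
log_allegation(["B. Jones"], "Journal Y")
print(open_cases_elsewhere("B. Jones", "Journal Y"))  # -> the open Journal X case
```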

Finally, another potential solution that has been overlooked is the role that funding agencies have to play in this process. In case anyone didn’t notice, the NIH open access mandate was introduced in 2008, and sat pretty much ignored for a couple of years. Then NIH came up with a simple carrot/stick – if you don’t comply, you won’t get funding.  Boom!  Now everyone is super careful to publish their work in journals that comply.

What if NIH were to draft a set of regulations for how journals ought to deal with these problems? Not guidelines or recommendations but actual RULES… Want to publish your NIH-funded work? It has to be in a journal that plays by the rules, otherwise you can’t list it on your bio-sketch. Just watch how quickly all the journals would fall into line. Their bread-and-butter is packaging up science they didn’t pay for and selling it back to the public, so if the hand that feeds them the bread says “jump”, they will jump. A mass exodus of researchers from journals that don’t comply would be cool. Maybe NIH could call the new mandate “Regulations On Biomedical Oversight Concerning Obsolete Publishing Practices”.


COPE: Nothing more than a useless trade association

I’ve said it before and I’ll say it again – COPE (the Committee on Publication Ethics) is nothing more than a trade association / lobby group for the publishing industry.  Its real job is to provide a feel-good excuse for the multi-$bn publishing industry to say “hey look we’re doing something about ethics”, in return for subscription fees. In the same way that being listed in Who’s Who appeals to vain individuals, being listed as a COPE member buys journals a semblance of ethical credibility.

What if that credibility counts for nothing?  As reported yesterday by Neuroskeptic, a new study by Morten Oksvold found a shockingly low rate of response from journal editors when confronted with blatant evidence of data irregularities in over 40 papers spread across 3 journals.  The response rate?  Zero. Zilch. Nada. Niente. Nil.

Guess what? All 3 journals are COPE members! The COPE Code of Conduct specifically tells editors to respond, stating what they plan to do in such cases. Ignoring such communications is a definite no-no.

From my own experiences, this is a common outcome.  Just about every journal or major publisher is a member of COPE, and yet time-and-again we see COPE guidelines being openly flouted. In one of the cases listed in that post (J. Neurosci. paper) I’ve been waiting over 2 years for the publisher to get their proverbial feces together. Last fall the case was raised to the level of a formal investigation by the ethics committee of the Society for Neuroscience, but they’ve stopped responding to my emails, despite me CC’ing COPE. The burden for ensuring that alleged data problems are dealt with in a timely manner falls firmly at the feet of the journals and their so-called trade association. It should not require Herculean efforts on the part of bloggers.  We know how to do this stuff properly – it just requires lazy editors to do their damn jobs!

What are the consequences for a journal or editor, if a breach of the COPE guidelines occurs?  Well, based on the Cell case I outlined in that previous post, there were none. The editor still has her job. There was no formal public announcement that the COPE guidelines had been breached. No indication that the person or persons behind the blatant conflict-of-interest suffered any negative effects whatsoever. A simple email to me (the complainant) stating that “procedures will be reviewed and improved”, and we all move on pretending this is fixed, and won’t happen again.

The underlying issue here is that COPE doesn’t have any teeth. All of the power is held by the journals, and COPE is their obedient little lap dog. When journals screw up, COPE could threaten to rescind their membership, but who in their right mind is going to challenge a multi-$bn giant such as Elsevier?

As scientists, we need to be frank about the reality of the relationship between the publishing industry and COPE.  If we want ethics cases to be handled properly, squealing to a pay-to-play vanity club is not the answer. COPE has consistently proven that they don’t have the power to change deeply entrenched behavior by editors. In contrast, taking matters into our own hands by using social media and sites such as PubPeer, continues to be an effective strategy to get results.