Today is my last day as Director of NIGMS. It is hard to believe that almost 8 years have passed since I was first offered this tremendous opportunity to serve the scientific community. It has been a privilege to work with the outstanding staff members at NIGMS and NIH, as well as with so many of you across the country.

As I write my final post, I find myself recalling a statement I heard from then-NIH Director Elias Zerhouni during my first few years here: "It is very difficult to translate that which you do not understand." He made this comment in the context of discussions about the balance between basic and applied research, which certainly has applicability in this setting and is relevant in a broader context as well. In some ways, it has also been my mantra for the NIGMS Feedback Loop.


I am pleased with our progress toward this goal, but there is considerable room for further evolution. The emergence and success of similar blogs such as Rock Talk are encouraging signs. I know that NIGMS Acting Director Judith Greenberg shares my enthusiasm for communication with the community, and I hope that the new NIGMS Director will too. I encourage you to continue to play your part, participate in the discussions and engage in the sort of dialogue that will best serve the scientific community.

In a previous post, I described some initial results from an analysis of the relationships between a range of productivity metrics and peer review scores. The analysis revealed that these productivity metrics do correlate to some extent with peer review scores but that substantial variation occurs across the population of grants.

Here, I explore these relationships in more detail. To facilitate this analysis, I separated the awards into new (Type 1) and competing renewal (Type 2) grants. Some parameters for these two classes are shown in Table 1.

To better visualize trends in the productivity metrics data in light of the large amounts of variability, I calculated running averages over sets of 100 grants separately for the Type 1 and Type 2 groups of grants, shown in Figures 1-3 below.
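
For readers interested in how such smoothing can be done, here is a minimal Python sketch of a centered running average computed separately for each grant type. The DataFrame layout and column names ("type", "percentile", "citations") are illustrative assumptions on my part, not the actual analysis files.

```python
import pandas as pd

def running_average(grants: pd.DataFrame, metric: str, window: int = 100) -> pd.Series:
    # Order grants from best (lowest) to worst percentile score, then
    # smooth the chosen metric over a window of 100 grants.
    ordered = grants.sort_values("percentile")
    return ordered[metric].rolling(window=window, center=True).mean()

# Hypothetical usage, assuming a "type" column distinguishing grant types:
# type1_curve = running_average(grants[grants["type"] == 1], "citations")
# type2_curve = running_average(grants[grants["type"] == 2], "citations")
```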

These graphs show somewhat different behavior for Type 1 and Type 2 grants. For Type 1 grants, the curves are relatively flat: each metric decreases slightly from the lowest (best) percentile scores, reaches a minimum near the 12th percentile, and then increases somewhat. For Type 2 grants, the curves are steeper and somewhat more monotonic.

Note that the curves for the number of highly cited publications for Type 1 and Type 2 grants are nearly superimposable above the 7th percentile. If this metric truly reflects high scientific impact, then the observations that new grants are comparable to competing renewals and that the level of highly cited publications extends through the full range of percentile scores reinforce the need to continue to support new ideas and new investigators.

While these graphs shed light on some of the underlying trends in the productivity metrics and the large amount of variability that is observed, one should be appropriately cautious in interpreting these data given the imperfections in the metrics; the fact that the data reflect only a single year; and the many legitimate sources of variability, such as differences between fields and publishing styles.

NIGMS is committed to thoughtful analysis before it initiates new programs and to careful, transparent assessment of ongoing programs at appropriate stages to help determine future directions. In this spirit, we have been assessing our large-scale programs, starting with the Protein Structure Initiative and, more recently, the glue grant program. Three observations from the glue grant assessment reports stand out.

First, these grants were always intended to be experiments in the organization of integrative science. One of the most striking features of the glue grants is how different they are from one another based on the scientific challenge being addressed, the nature of the scientific community in which each glue grant is embedded and the approach of the principal investigator and other members of the leadership team. The process evaluation expressed major concern about such differences between the glue grants, but in fact this diversity reflects a core principle of NIGMS: a deep appreciation for the role of the scientific community in defining scientific problems and identifying approaches for addressing them.

Second, as highlighted in both reports, the need for rapid and open data sharing remains a great challenge. All of the glue grants have included substantial investments in information technology and have developed open policies toward data release. However, making data available and successfully sharing data in forms that the scientific community will embrace are not equivalent. And of course, effective data and knowledge sharing is a challenge throughout the scientific community, not just in the glue grants.

Third, the timing for these assessments is challenging. On one hand, it is desirable to perform an assessment as early as possible after a new program begins, to inform program management and indicate where adjustments may be needed. On the other hand, the impact of scientific advances takes time to unfold, and this can be particularly true for ambitious, larger-scale programs. It may be worthwhile to revisit these programs in the future to gauge their impact over time.

The analysis I discuss below reveals that peer review scores do predict trends in productivity in a manner that is statistically different from random ordering. That said, there is a substantial level of variation in productivity metrics among grants with similar peer review scores and, indeed, across the full distribution of funded grants.

I analyzed 789 R01 grants that NIGMS competitively funded during Fiscal Year 2006. This pool represents all funded R01 applications that received both a priority score and a percentile score during peer review. There were 357 new (Type 1) grants and 432 competing renewal (Type 2) grants, with a median direct cost of $195,000. The percentile scores for these applications ranged from 0.1 through 43.4, with 93% of the applications having scores lower than 20. Figure 1 shows the percentile score distribution.

The numbers of publications and citations represent the simplest available metrics of productivity. More refined metrics include the number of research (as opposed to review) publications, the number of citations that are not self-citations, the number of citations corrected for typical time dependence (since more recent publications have not had as much time to be cited as older publications), and the number of highly cited publications (which I defined as the top 10% of all publications in a given set). Of course, the metrics are not independent of one another. Table 1 shows these metrics and the correlation coefficients between them.
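
To make these definitions concrete, here is a rough Python sketch of the highly cited flag and of the correlation table. The table structures and column names are assumptions for illustration only, not the actual analysis code.

```python
import pandas as pd

def flag_highly_cited(pubs: pd.DataFrame) -> pd.Series:
    # A publication is "highly cited" if it falls in the top 10% of all
    # publications in the set, ranked by citation count.
    threshold = pubs["citations"].quantile(0.90)
    return pubs["citations"] >= threshold

# Hypothetical per-grant metric totals; .corr() yields the pairwise
# Pearson correlation coefficients between the metrics (as in Table 1).
metrics = ["publications", "research_pubs", "citations",
           "citations_excl_self", "citations_time_adj", "highly_cited"]
# correlation_table = grants[metrics].corr()
```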

As could be anticipated, there is substantial scatter across each distribution. However, as could also be anticipated, each of these metrics has a negative correlation coefficient with the percentile score, with higher productivity metrics corresponding to lower percentile scores, as shown in Table 2.

Do these distributions reflect statistically significant relationships? This can be addressed with a Lorenz curve, which plots the cumulative fraction of a given metric as a function of the cumulative fraction of grants, ordered by their percentile scores. Figure 5 shows the Lorenz curve for citations.
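
Here is a sketch of how such a curve can be constructed, assuming arrays of percentile scores and per-grant citation counts (the variable and column names are hypothetical):

```python
import numpy as np

def lorenz_curve(percentiles, metric_values):
    # Order grants from best (lowest) to worst percentile score.
    order = np.argsort(percentiles)
    vals = np.asarray(metric_values, dtype=float)[order]
    # Cumulative fraction of grants and of the metric, respectively.
    cum_grants = np.arange(1, len(vals) + 1) / len(vals)
    cum_metric = np.cumsum(vals) / vals.sum()
    return cum_grants, cum_metric

# Plotting sketch:
# import matplotlib.pyplot as plt
# x, y = lorenz_curve(grants["percentile"], grants["citations"])
# plt.plot(x, y)                  # Lorenz curve for citations
# plt.plot([0, 1], [0, 1], "--")  # diagonal expected for a uniform distribution
```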

The tendency of the Lorenz curve to reflect a non-uniform distribution can be measured by the Gini coefficient, which corresponds to twice the shaded area in Figure 5. For citations, the Gini coefficient is 0.096. Based on simulations, this value is 3.5 standard deviations above that expected for a random distribution of citations as a function of percentile score. Thus, the relationship between citations and percentile score is highly statistically significant, even though the grant-to-grant variation within a narrow range of percentile scores is quite substantial. Table 3 shows the Gini coefficients for all of the productivity metrics.
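
The Gini coefficient and the simulation-based significance test can be sketched as follows. The permutation approach shown here is my reconstruction of "based on simulations," not necessarily the exact procedure used.

```python
import numpy as np

def gini(percentiles, metric_values):
    # Twice the area between the Lorenz curve and the diagonal, with
    # grants ordered from best (lowest) to worst percentile score.
    order = np.argsort(percentiles)
    vals = np.asarray(metric_values, dtype=float)[order]
    cum_grants = np.arange(1, len(vals) + 1) / len(vals)
    cum_metric = np.cumsum(vals) / vals.sum()
    return 2.0 * np.trapz(cum_metric - cum_grants, cum_grants)

def gini_z_score(percentiles, metric_values, n_sim=10_000, seed=0):
    # Compare the observed Gini coefficient with those obtained after
    # randomly shuffling the metric values across grants.
    rng = np.random.default_rng(seed)
    vals = np.asarray(metric_values)
    observed = gini(percentiles, vals)
    sims = np.array([gini(percentiles, rng.permutation(vals))
                     for _ in range(n_sim)])
    return (observed - sims.mean()) / sims.std()
```

A z-score of roughly 3.5, as reported above for citations, indicates that the observed ordering is very unlikely to have arisen by chance.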

Of these metrics, overall citations show the most statistically significant Gini coefficient, whereas highly cited publications show one of the least significant Gini coefficients. As shown in Figure 4, the distribution of highly cited publications is relatively even across the entire percentile score range.

We have just posted the final version of Investing in the Future: National Institute of General Medical Sciences Strategic Plan for Biomedical and Behavioral Research Training. Fifteen months in the making, the plan reflects our long-standing commitment to research training and the development of a highly capable, diverse scientific workforce.

My thanks to the dedicated committee of NIGMS staff who developed the plan and to the hundreds of investigators, postdocs, students, university administrators and others who took the time to give us their views throughout the planning process.

This is a good reminder to all of us in the scientific community about our responsibility to reach out broadly to explain what we do and why we do it in understandable terms that can inform the public and potentially inspire new members of future generations of scientists.

I have previously noted that NIH has proposed creating a new entity, the National Center for Advancing Translational Sciences (NCATS), to house a number of existing programs relating to the discipline of translational science and the development of novel therapeutics. Plans for NCATS have been coupled to a proposal to dismantle the National Center for Research Resources (NCRR), in part because the largest program within NCRR, the Clinical and Translational Science Awards, would be transferred to NCATS and in part because of a statutory limitation on the number of institutes and centers at NIH.

As you may be aware, I have expressed concerns about the processes associated with the proposal to abolish NCRR. I hope it is clear that my concerns relate to the processes and not to the NCRR programs, which I hold in very high regard. This opinion is also clearly shared by many others in the scientific community, based on comments on the Feedback NIH site and in other venues.
