School of Physics & Astronomy
Assay and Acquisition of Radiopure Materials


DOE COSMIC FRONTIER NOTES (P. Cushman, 12/15/14)

Summary of the data presented in talks by Patwa and Turner at the August 2014 PI Meeting, plus some thoughts on how the review process itself impacts the number of proposals and the reviewer load.

DOE Comparative Review Process (started 3 cycles ago)

  • Segregation into frontiers
    • A. Disadvantages cross-cutting proposals, especially CF vs. neutrinos, and maybe detector R&D. Synergies across frontiers are hard to point out.
    • B. Difficult to disentangle big grants with multiple players
    • C. Has multiplied the number of reviews done each year and the number of reviewers needed.
    • D. Need more guidance on the big grants: logical divisions and some sort of summary that details the overlap. Perhaps they need a template.
  • Review effect of topical divisions in other agencies – is there an optimal size?
  • New concentration on PIs; each one is ranked.
  • Page limit is 9 pages per PI (this is good! It used to be unlimited! What is it elsewhere?)
  • Notice many proposals from same person
    • Across agencies – can these be sorted into appropriate agency ahead of time?
    • Different topics to same reviewers – makes the reviewers have to choose.
  • Broad discussion among reviewers is good: it tends to encourage young researchers, and some budgetary info is gleaned from reviewers (though the final word is DOE's). COI rules are not applied strictly, which works well, since reviewers tend to be fair; as long as they are upfront about their affiliations, others can take their recommendations with a grain of salt.
  • Can you find enough reviewers? Balanced set? Is it getting harder to find reviewers?

2014 Data

  • Cosmic: 28 proposals (including parts of umbrella grants) out of 130 across all HEP. That is 21% of all proposals; with 6 frontiers, an even split would be about 17%, so the number of CF proposals is somewhat higher than average. We could use this to normalize HEP data, or we could request this data broken down for only CF. We could request the same data for all years, including this year.
  • The number of reviewers in HEP (Cosmic): 127 (27 CF) is proportional to the fraction of proposals received.
  • There were 571 (120 CF) reviews: 4.6 reviews per proposal.
  • CF: Success rate for renewal (new) was 100% (36%)
  • All HEP: Success rates are 85% (24%) for 2014 and 78% (34%) for 2013
    • New proposals are not doing as well as renewals, and it is worse in the cosmic frontier. Are these “practice” proposals that will do better the second time around?
  • In general, CF is attracting more new proposals as people switch between frontiers.
  • Anecdotally, Energy ⇒ Cosmic. Can we get numbers for this? Does the number of proposals go down in one frontier when the others go up?
  • Budget info is lacking. The high success rate at DOE is due to providing lower funding than requested. What is the percent reduction in funding across labs, universities, and new vs. old grants?
  • Jr Faculty in CF: only 1 funded out of 9 reviewed, all NEW proposals (is this the DE vs. Astro dilemma?). At 11% this is much lower than the HEP average for Jr faculty: 90% (48%).
  • Research Scientists: the numbers are complicated by fractional appointments on many different proposals. For CF: 78% (by task?) and no new ones.

Early Career

  • 2-step procedure: written reviews (specialized??) down-select first; only the top third goes on to the review panel. How is this working?
  • All frontiers are reviewed in the same panel, and Lab/Univ proposals by the same panel. Comment on how well this works.
  • 5% success rate; applicants are encouraged to also apply to the comparative review (but that comes BEFORE the Career review, which is confusing).
  • Higher funding level for labs, but roughly the same number of awards at labs and universities.
  • Generally even across frontiers (except theory, at almost 2x, mostly from small university grants)
  • Early Career used to be a gateway to becoming a PI on a grant. That can hardly still be the case at a 5% success rate. How has this impacted young researchers? How can we find data to understand this?
  • The Career review is AFTER the comparative review
    • 1. Makes it difficult to judge a comparative review without knowing the Career outcome.
    • 2. Doubles the number of proposals, since PIs are encouraged to do both.
    • 3. If Career came first, a PI whose Career proposal was not funded could still submit to the comparative review.
aaac/doe_cosmic_frontier/summary2014.txt · Last modified: 2014/12/18 17:30 by prisca