==== DOE COSMIC FRONTIER NOTES (P. Cushman, 12/15/14) ====
Summary of the data presented in talks by Patwa and Turner at the August 2014 PI Meeting. In addition, some thoughts on how the review process itself impacts the number of proposals and reviewer load.
  
=== DOE Comparative Review Process (started 3 cycles ago) ===
  * Segregation into frontiers
    * A. Disadvantages cross-cutting proposals, especially CF vs neutrinos, and maybe detector R&D. Synergies across frontiers are hard to point out.
    * B. Difficult to disentangle big grants with multiple players
    * C. Has multiplied the number of reviews done each year and the number of reviewers needed.
    * D. Need more guidance on the big grants – logical divisions and some sort of summary that details overlap. Perhaps they need a template.
  
  * Review the effect of topical divisions in other agencies – is there an optimal size?
=== 2014 Data ===
  
  * Cosmic: 28 proposals (including parts of umbrella grants) out of 130 across all HEP. CF receives 21% of the proposals even though it is only one of 6 frontiers, so the number of proposals is somewhat higher than average (see the quick arithmetic check after this list). We could use this to normalize the HEP data, or we could request the data broken down for only CF. We could request the same data for all years, including this year.
  * The number of reviewers in HEP (Cosmic): 127 (27 CF) is proportional to the fraction of proposals received.
  * There were 571 (120 CF) reviews: 4.6 reviews per proposal.
  * CF: Success rate for renewal (new) was 100% (36%)
  * All HEP: Success rates are 85% (24%) for 2014 and 78% (34%) for 2013
    * New proposals are not doing as well as renewals, and it is worse in the Cosmic Frontier. Are these "practice" proposals that will do better the second time around?
  * In general, CF is attracting more new proposals as people switch between frontiers.
  * Anecdotally Energy ⇒ Cosmic. Can we get numbers for this? Does the number of proposals in one frontier go down when the others go up?
  * Budget info is lacking. The high success rate at DOE is due to providing lower funding than requested. What is the percent reduction in funding across labs, universities, and new vs old?
  * Jr Faculty in CF: only 1 funded of 9 reviewed, all of them NEW (is this the DE vs Astro dilemma?). At 11% (1 of 9) this is much lower than the HEP average for Jr faculty: 90% (48%)
  * Research Scientists: the numbers are complicated by fractions on many different proposals. For CF: 78% (by task?) and no new ones.
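A minimal sketch (not from the talks) to sanity-check the shares quoted in the list above, using only the counts given in these notes; the 1/6 "even split" baseline and the variable names are assumptions for illustration.

<code python>
# Quick arithmetic check of the 2014 Data bullets above.
# All counts come from the notes; the 1/6 even-split baseline is an
# assumption used only for comparison, not a number from the talks.

hep_proposals, cf_proposals = 130, 28          # proposals across HEP / in CF
hep_reviewers, cf_reviewers = 127, 27          # reviewers across HEP / in CF
cf_jr_funded, cf_jr_reviewed = 1, 9            # CF junior-faculty outcomes

cf_share = cf_proposals / hep_proposals        # ~0.215, the ~21% quoted above
even_split = 1 / 6                             # ~0.167 if all 6 frontiers were equal
reviewer_share = cf_reviewers / hep_reviewers  # ~0.213, tracks the proposal share
jr_rate = cf_jr_funded / cf_jr_reviewed        # ~0.11 success for new CF Jr faculty

print(f"CF proposal share: {cf_share:.1%} vs even split {even_split:.1%}")
print(f"CF reviewer share: {reviewer_share:.1%}")
print(f"CF Jr faculty success: {jr_rate:.1%}")
</code>

Running it gives a ~21.5% CF proposal share against a ~16.7% even-split baseline, a ~21.3% reviewer share, and the ~11% junior-faculty rate implied by 1 funded of 9 reviewed.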
  
=== Early Career ===
  * 2-step procedure: written review (specialized??) down-selects first, then only the top third goes to the review panel. How is this working?
  * All frontiers in the same panel. Lab/Univ reviewed by the same panel. Comment on how well this works.
  * 5% success rate – applicants are encouraged to also apply to the comparative review (but that is BEFORE Career – confusing)
  * Higher funding level for labs, but roughly the same number in both Lab and Univ.
  * Generally even across frontiers (except theory is almost x2, mostly from small University grants)
  * Early Career used to be a gateway to becoming a PI on a grant. However, this can hardly still be the case with a 5% success rate. How has this impacted young researchers? How can we find data to understand this?
  * The Career review is AFTER the comparative review
    * 1. Makes it difficult to judge a comparative review without knowing the Career outcome.
    * 2. Doubles the number of proposals – applicants are encouraged to do both.
    * 3. If Career came first, then a PI who did not get funded could still submit to the comparative review.
  