School of Physics & Astronomy
Assay and Acquisition of Radiopure Materials

aaac:jul9

Revisions: 2015/07/09 10:44 and 2015/07/09 14:47 (current), by prisca

=== White Paper ===
  * Newest draft version {{:aaac:proposal_success_rates_aaac_201507_03.docx|docx with tracking}} and {{:aaac:proposal_success_rates_aaac_201507_03a.pdf|clean pdf}}
  * Comments (Prisca)
    - Need help from Ted interpreting his data: can we ask for a bit more detail and report page references? What does quoting two numbers mean? Can we use one average number? There is some mushiness in the time-cost and paper-cost distinctions. Define opportunity cost.
    - Do we have a way to get the Matthew effect for 35%? More details are needed on the aspirational 35% number (or should it be 30%?).
  * Comments from Todd {{:aaac:todd_comments.pdf|pdf of email}}
    - The write-up seems very astronomy/astrophysics-centric. Adding references to planetary science and heliophysics in a few places would be useful. I think the analysis and conclusions are the same for all of our disciplines; stating that fact, and explicitly acknowledging that most of the analysis presented in the report is based on information from astronomy in order to be specific, would be sufficient.
    - The 20% figure we identify as a minimum success rate seems to be a fairly subjective judgment (I don't say arbitrary). While we make a reasonable case that this is an uncomfortable but liveable floor, it is not clear why 20% is any different from 22% or 18%. While 20% is better than 15%, I don't think it is sustainable. I'd argue that 25% is a better lower bound (with the community able to withstand a year of lower rates) and that 30% is healthy.
    - There are additional costs associated with low success rates beyond the inefficient use of proposers' time. I think this doesn't come across as clearly as it should in the write-up. We've talked about these in our earlier discussions; maybe I'm missing something about the purpose of this document. Anyway, here's a cut at a list:
      * a. This is inefficient for proposers because they spend a lot of time and creative energy writing proposals, even E/VG ones, that are not funded. We've covered this aspect pretty well in the document.
      * b. The increased number of proposals is a burden on the reviewers, not simply because there are more proposals, but because the process seems inefficient when so few top-ranked proposals are selected.
      * c. The administrative burden on the agencies is high, not just because of staff time, but also the logistics and cost of running a large number of panels. Selecting only a small handful of proposals from a panel is not an efficient use of resources because the cost is large.
      * d. Faith in the process is lost or eroded when top-ranked proposals are not selected. If one's E/VG proposal is not selected once or twice, then the process seems arbitrary and confidence in the peer review system declines.
      * e. People leave the field. This may affect less senior people disproportionately, but it has been seen at all levels, in heliophysics at least. The effect may be different for individual researchers, soft-money people, and groups of larger size.
      * f. When the programs shrink beyond a certain point, parts of the program are lost. When this happens through the selection or non-selection of individual proposals, the long-term impacts on the discipline can be haphazard. Programmatically it may be better to deliberately place limits on particular avenues of research, rather than relying only on the outcome of panel reviews to determine which research areas are lost.
    - The conclusions about 'rebalancing' the program seem outside our scope. We haven't considered the impacts of further cuts to observatories and missions. What we have done quite well is characterize a problem, and many of its impacts, that must be addressed. I think our conclusion should be that more resources need to be allocated to competed research programs.
    - We should differentiate two kinds of approach to the problem: strategic and tactical. Tactical actions might include pre-proposals, limiting opportunities, limiting proposers, grant size and duration, etc. - many of the things we've discussed to deal with the immediate problem. However, the strategic goal is to ensure that the competed research program is of adequate size and scope to support the community in a way that allows us to accomplish the goals of the decadal surveys in an efficient and cost-effective manner.
    - Perhaps this is a quibble. I don't believe that many people simply resubmit the same proposal over and over again. If something isn't selected once, then additional work goes into improving the proposal for repeat submission, particularly when useful guidance is provided in the review process. If something fails twice, I'd be very reluctant to try a third time with the same idea. Perhaps this gets at the apparent implicit assumption that E/VG proposals are selected randomly and thus submitting multiple times gives better odds of selection (see 3.c above, and the illustrative sketch after this list).
    - The success-rate figure is good. I'd like to see an addition that shows the number of proposals selected each year, or alternatively the number of proposals submitted. Note that this should be the number of proposals BEFORE any mitigation (like two-step proposals).
    - The second panel of Figure 2 is misleading. Using 1000 as the baseline, the graph suggests a relentless increase in the number of unique proposers that looks very dramatic; in fact, going from 1025 to 1160 is only a 13% jump.
    - Paragraph 2 of Section 1.3 describes a whopping increase in the number of PIs in the AAS from 1990 (why this date and not something more recent?) to the present: from 200 (7% of 3000) to nearly 600 (13% of 4500). The number has tripled and the fraction has doubled. I don't understand the purpose of this paragraph.
  * Comments from Paul {{:aaac:paulhertzcomments.pdf|pdf of email}}
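As a purely illustrative aside on point 6 above (an editorial sketch, not part of Todd's email): if E/VG proposals really were selected at random at a fixed success rate p, the chance of at least one selection in n independent submissions would be 1 - (1 - p)^n, which is what would make blind resubmission pay off. The rates below are examples, not program statistics.

<code python>
# Chance of at least one selection in n independent submissions, under the
# (disputed) assumption that selection is a random draw at a fixed rate p.
# The success rates used here are illustrative only.

def p_at_least_one(p, n):
    """Probability of at least one success in n independent tries at rate p."""
    return 1.0 - (1.0 - p) ** n

for p in (0.15, 0.20, 0.25):
    line = ", ".join(f"n={n}: {p_at_least_one(p, n):.0%}" for n in (1, 2, 3))
    print(f"success rate {p:.0%} -> {line}")

# e.g. at a 20% rate, two submissions would give 1 - 0.8**2 = 36%; the point
# of the critique is that real panel selections do not behave like this.
</code>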

=== Submit a new survey ===
  * What is our schedule? Has anything further been done? Add your ideas to [[aaac:newsurvey|Sample questions that go beyond the Von Hippel Survey]].
=== Drill down and fill in the gaps on Agency Statistics ===
  * Michael Cooke has added material to the [[aaac:doe_cosmic_frontier:resources|DOE Cosmic Frontier Resource Page]].
  * Prisca and Michael will continue to work together on pulling together the relevant data. I remind you that DOE provides a counterexample on success rates; see the spreadsheet at that link. Other issues:
    * Demographic data (gender, race, age) are not requested (there is no database); they might be inferred from the comparative review notes. Early Career awards require < 10 yrs from PhD.
    * Data exist on whether a proposal is "new" to the HEP program vs. a "renewal". A PI moving between research thrusts (aka "frontiers") would be considered a "renewal" in this context. Is there data on someone who resubmits the same proposal the next year after it was rejected the first time?
    * Successful awards have public information on the institution, the PI, and the total amount of the award given by HEP.
    * They do NOT have the number of PIs on a grant, the total funding requested in the original proposal, or the breakdown of funding by frontier. DOE is considering how to capture that.
    * The data are limited in how far back you can go: HEP began relying on the comparative review process for proposals submitted to the FY 2012 funding cycle. Some data exist from before 2012, but they are not as detailed and there are concerns about accuracy.
    * Agency impact: the comparative review is an improvement over the previous mail-in-reviews-only process, and the outcomes that we viewed were fair (this comes from the COV).
    * Agency impact: DOE has been successful at getting reviewers, particularly new reviewers: 153 reviewers participated in the FY 2015 comparative review process, in which 687 reviews were completed, with an average of 4.9 reviews per proposal.
  * NASA: Linda, Hasan, and Daniel are willing to help, but do not have a lot of time. What is the best use of their time? Can we get better/more merit data to fill in the gaps in years, and for Astro, Helio, and Planetary separately?
  * Helio and Planetary: we need a point person who will consider what data already exist (see our long report) and what else we need. These data provide information on pre-proposal models; we need the latest data to update what we have.
  * NSF is short-handed for this work, but can be tapped to mine the data if we have a very specific question to ask. I would suggest the number of unique proposers per 2-, 4-, and 5-year window to complement the already existing 1-year and 3-year counts (a sketch of this tabulation follows this list). Can we fit for the number of repeat proposals? Can we get this data for other agencies?
  * Put your ideas into [[aaac:agencystats|the Agency stats page]].
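A minimal sketch of the tabulation suggested in the NSF bullet above: counting unique proposers over rolling 2-, 4-, and 5-year windows from a flat table of proposal records. The file name and the columns ''year'' and ''proposer_id'' are hypothetical placeholders, not any agency's actual schema; comparing how the count grows as the window widens is one way to get at the number of repeat proposers.

<code python>
# Count unique proposers in rolling multi-year windows, to complement the
# existing 1-year and 3-year counts. "proposals.csv" and the columns
# 'year' and 'proposer_id' are hypothetical placeholders for illustration.
import pandas as pd

proposals = pd.read_csv("proposals.csv")  # one row per submitted proposal

def unique_proposers(df, window):
    """Unique proposer count for each run of `window` consecutive years."""
    years = sorted(df["year"].unique())
    rows = []
    for start in years:
        end = start + window - 1
        if end > years[-1]:
            break
        in_window = df["year"].between(start, end)  # inclusive on both ends
        rows.append({"start": start, "end": end,
                     "unique_proposers": df.loc[in_window, "proposer_id"].nunique()})
    return pd.DataFrame(rows)

for w in (2, 4, 5):
    print(f"--- {w}-year windows ---")
    print(unique_proposers(proposals, w))
</code>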