=== White Paper ===
  * Newest draft version {{:
  * Comments (Prisca)
    - Need help from Ted interpreting his data: e.g., what is the average range (2 numbers …)? Can we ask for a bit more detail and report page references? What does quoting …
    - Do we have a way to get the Matthew effect for 35%? More details on the aspirational 35% number (or should it be 30%?).
  * Comments from Todd {{:

1. The write-up seems very astronomy/…
2. The 20% figure we identify as a minimum success rate seems to be a fairly subjective judgment (I don't say arbitrary). While we make a reasonable case that this is an uncomfortable but liveable floor, it is not clear why 20% is any different from 22% or 18%. While 20% is better than 15%, I don't think it is sustainable. I'd argue that 25% is a better lower bound (with the community able to withstand a year of lower rates) and that 30% is healthy.
3. There are additional costs associated with low success rates beyond inefficient use of proposers' time:

a. This is inefficient for proposers because they spend a lot of time and creative energy writing proposals, even E/VG ones, that are not funded. We've covered this aspect pretty well in the document.

b. The increased number of proposals is a burden on the reviewers. Not simply because there are more proposals, but because the process seems inefficient when so few top-ranked proposals are selected.

c. The administrative burden on the agencies is high. Not just because of staff time, but also the logistics and cost of running a large number of panels. Selecting only a small handful of proposals from a panel is not an efficient use of resources because the cost is large.

d. Faith in the process is lost or eroded when top-ranked proposals are not selected. If one's E/VG proposal is not selected once or twice, then the process seems arbitrary and confidence in the peer review system declines.

e. People leave the field. This may affect less senior people disproportionately, …

f. When the programs shrink beyond a certain point, parts of the program are lost. When this is done by the selection/…
4. The conclusions about '…

5. We should differentiate two kinds of approach to the problem - strategic and tactical. Tactical actions might include pre-proposals, …

6. Perhaps this is a quibble. I don't believe that many people simply resubmit the same proposal over and over again. If something isn't selected once, then additional work goes into it in order to improve the proposal for repeat submission - particularly when useful guidance is provided in the review process. If something fails twice, I'd be very reluctant to try a third time with the same idea. Perhaps this gets at the apparent implicit assumption that E/VG proposals are selected randomly and thus submitting multiple times gives better odds of selection. See 3.c above.

7. The success-rate figure is good. I'd like to see an addition that shows the number of proposals selected each year. Or alternatively, …
8. The second panel of Figure 2 is misleading. Using 1000 as the baseline, the graph suggests a relentless increase in the number of unique proposers that looks very dramatic. In fact, going from 1025 to 1160 is only a 13% jump (see the arithmetic check after these comments).

9. Paragraph 2 of Section 1.3 describes a whopping increase in the number of PIs in the AAS from 1990 (why this date and not something more recent?) to the present - from 200 (7% of 3000) to nearly 600 (13% of 4500). The number has tripled and the fraction has doubled (also checked below). I don't understand the purpose of this paragraph.
  * Comments from Paul {{:
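A quick arithmetic check of the figures quoted in comments 8 and 9 (this uses only the numbers already cited above, no new data):

$$\frac{1160 - 1025}{1025} \approx 0.13 = 13\%$$

$$0.07 \times 3000 = 210 \approx 200, \qquad 0.13 \times 4500 = 585 \approx 600, \qquad \frac{585}{210} \approx 2.8, \qquad \frac{0.13}{0.07} \approx 1.9$$

So the PI count has indeed roughly tripled while the fraction has roughly doubled, consistent with the text.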
=== Submit a new survey ===
  * What is our schedule?
=== Drill down and fill in the gaps on Agency Statistics ===
  * Michael Cooke added to the [[aaac:
  * Prisca and Michael will continue to work together on pulling together the relevant data. I remind you that DOE provides a counterexample to success rates - see the spreadsheet at that link. Other issues are:
    * Demographic data (gender, race, age) are not requested (there is no database).
    * Data exist on whether a proposal is “new” to the HEP program vs. a “renewal”.
    * Successful awards have public information on the institution, …
    * They do NOT have the number of PIs on a grant, the total funding requested in the original proposal, or the breakdown of funding by frontier.
    * Limited in how far back you can go: HEP began relying on the comparative review process for proposals submitted to the FY 2012 funding cycle. Some data exist from before 2012, but they are not as detailed and there are concerns about accuracy.
    * Agency impact: the comparative review is an improvement over the previous mail-in-reviews-only process. The outcomes that we viewed were fair. (This comes from the COV.)
    * Agency impact: successful at getting reviewers, particularly new reviewers: 153 reviewers participated in the FY 2015 comparative review process, in which 687 reviews were completed, with an average of 4.9 reviews per proposal.
  * NASA: Linda, Hasan, and Daniel are willing to help but do not have a lot of time. What is the best use of their time? Can we get better/more merit data to fill in the gaps in years, and for Astro, Helio, and Planetary separately?
  * Helio and Planetary: we need a point person who will consider what data already exist (see our long report) and what else we need. These data provide information on pre-proposal models; we need the latest data to update what we have.
  * NSF is short-handed for this work but can be tapped to mine the data if we have a very specific question to ask. I would suggest the number of unique proposers per 2 years, 4 years, and 5 years, to complement the already existing 1-year and 3-year numbers (a sketch of such a query is given after this list). Can we fit for the number of repeat proposals? Can we get this data for other agencies?
  * Put your ideas into [[aaac:
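Below is a minimal sketch of the kind of data-mining query suggested in the NSF bullet above. It assumes a per-proposal table with hypothetical columns ''proposer_id'' and ''year''; the actual agency records and field names will differ, so treat this as an illustration rather than a working query against real data.

<code python>
# Minimal sketch: count unique proposers per N-year window.
# Assumes a hypothetical per-proposal table with columns
# "proposer_id" and "year"; real agency records will differ.
import pandas as pd

def unique_proposers_per_window(df: pd.DataFrame, window_years: int) -> pd.Series:
    """Count distinct proposer IDs in each window [start, start + window_years - 1]."""
    first, last = int(df["year"].min()), int(df["year"].max())
    return pd.Series(
        {
            start: df.loc[
                df["year"].between(start, start + window_years - 1),
                "proposer_id",
            ].nunique()
            for start in range(first, last - window_years + 2)
        },
        name=f"unique_per_{window_years}yr",
    )

# Toy example; real input would come from the agency's proposal records.
proposals = pd.DataFrame(
    {
        "proposer_id": ["a", "b", "a", "c", "b", "d"],
        "year": [2010, 2010, 2011, 2011, 2012, 2012],
    }
)
for w in (1, 2, 3):  # the suggestion above also asks for 4- and 5-year windows
    print(unique_proposers_per_window(proposals, w))
</code>

Counting how often each ''proposer_id'' appears in the same table would similarly give the repeat-proposal numbers asked about above.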