c. The administrative burden on the agencies is high, not just because of staff time but also because of the logistics and cost of running a large number of panels. Selecting only a small handful of proposals from a panel is not an efficient use of resources, because the cost of running the panel is large regardless of how few proposals are selected.
d. Faith in the process is lost or eroded when top-ranked proposals are not selected. If one's E/VG (Excellent/Very Good) proposal is not selected once or twice, the process begins to seem arbitrary and confidence in the peer review system declines.
e. People leave the field. This may affect less senior people disproportionately, but it has been seen at all levels, at least in heliophysics. The effect may differ for individual researchers, soft-money researchers, and larger groups.
f. When the programs shrink beyond a certain point, parts of the program are lost. When this happens through the selection or non-selection of individual proposals, the long-term impacts on the discipline can be haphazard. Programmatically, it may be better to deliberately place limits on particular avenues of research than to rely solely on the outcome of panel reviews to determine which research areas are lost.

4. The conclusions about 'rebalancing' the program seem outside our scope. We haven't considered the impacts of further cuts to observatories and missions. What we have done quite well is characterize a problem and many of its impacts that must be addressed. I think our conclusion should be that more resources need to be allocated to competed research programs.
5. We should differentiate two kinds of approach to the problem: strategic and tactical. Tactical actions might include pre-proposals, limiting opportunities, limiting proposers, adjusting grant size and duration, and so on - many of the things we've discussed to deal with the immediate problem. The strategic goal, however, is to ensure that the competed research program is of adequate size and scope to support the community in a way that allows us to accomplish the goals of the decadal surveys efficiently and cost-effectively.
6. Perhaps this is a quibble, but I don't believe that many people simply resubmit the same proposal over and over again. If a proposal isn't selected the first time, additional work goes into improving it before resubmission, particularly when useful guidance is provided in the review process. If it fails twice, I'd be very reluctant to try a third time with the same idea. Perhaps this gets at the apparent implicit assumption that E/VG proposals are selected randomly, and thus that submitting multiple times gives better odds of selection. See 3.c. above.
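To make that implicit assumption concrete: if selection really were a random draw at a fixed rate, repeated submission would raise the odds of eventual selection as sketched below (a minimal illustration; the 15% selection rate is a hypothetical value, not a figure from this report).

    # Odds of at least one selection in n independent tries, IF outcomes
    # were truly random at a fixed rate p. The p used here (0.15) is
    # hypothetical, not a number taken from the report.
    def odds_of_selection(p, n):
        return 1.0 - (1.0 - p) ** n

    for n in (1, 2, 3):
        print(n, odds_of_selection(0.15, n))
    # Roughly 0.15, 0.28, and 0.39 - the gain per resubmission shrinks,
    # and in practice proposals are revised, not redrawn at random.

Even under the random-selection assumption, the payoff from blind resubmission diminishes quickly, which is consistent with the reluctance described above.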
7. The success-rate figure is good. I'd like to see an addition showing the number of proposals selected each year, or alternatively the number submitted. Note that this should be the number of proposals BEFORE any mitigation (such as two-step proposals).
8. The second panel of Figure 2 is misleading. With 1000 as the y-axis baseline, the graph suggests a relentless and dramatic increase in the number of unique proposers. In fact, going from 1025 to 1160 is only a 13% increase.
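A quick computation shows how much the truncated axis exaggerates the trend (a minimal sketch using the 1025 and 1160 values quoted above; the 1000 baseline is the axis origin as it appears in the figure):

    # Actual growth vs. apparent growth when the y-axis starts at 1000
    # instead of 0. Values are those quoted in the comment above.
    start, end, axis_origin = 1025, 1160, 1000
    actual = (end - start) / start                          # ~0.132
    apparent = (end - axis_origin) / (start - axis_origin)  # ratio of plotted heights
    print(f"actual increase: {actual:.1%}")          # ~13.2%
    print(f"plotted height ratio: {apparent:.1f}x")  # 6.4x

A ~13% increase is rendered as a ~6.4x difference in plotted height, which is why the panel reads as so dramatic.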
9. Paragraph 2 of Section 1.3 describes a whopping increase in the number of PIs in the AAS between 1990 (why this date and not something more recent?) and the present: from 200 (7% of 3000) to nearly 600 (13% of 4500). The number has tripled and the fraction has doubled. I don't understand the purpose of this paragraph.
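For what it's worth, the arithmetic as quoted is internally consistent (a quick check using only the numbers in that paragraph, not independently sourced data):

    # Sanity-check the quoted AAS figures: 7% of 3000 members (1990)
    # vs. 13% of 4500 (present). Numbers are as quoted, not verified.
    then_count = 0.07 * 3000   # 210, i.e. "200"
    now_count = 0.13 * 4500    # 585, i.e. "nearly 600"
    print(now_count / then_count)   # ~2.8 - roughly "tripled"
    print(0.13 / 0.07)              # ~1.9 - roughly "doubled"

So the numbers hold together; the open question is what conclusion the paragraph is meant to support.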