Agenda July 9, 2015

White Paper

  • Newest draft version docx with tracking and clean pdf
  • Comments (Prisca)
    1. Need help from Ted interpreting his data: e.g., what is the average range (two numbers are quoted)? Can we use a single average number? There is some mushiness in the distinction between time cost and paper cost. Define opportunity cost.
    2. Do we have a way to get the Matthew effect for 35%? More details are needed on the aspirational 35% number (or should it be 30%?).
  • Comments from Todd

1. The write-up seems very astronomy/astrophysics-centric. Adding references to planetary science and heliophysics in a few places would be useful. I think the analysis and conclusions are the same for all of our disciplines. Stating that fact, and explicitly acknowledging that in order to be specific most of the analysis presented in the report is based on information from astronomy, would be sufficient.

2. The 20% figure we identify as a minimum success rate seems to be a fairly subjective judgment (I don't say arbitrary). While we make a reasonable case that this is an uncomfortable but livable floor, it is not clear why 20% is any different from 22% or 18%. While 20% is better than 15%, I don't think it is sustainable. I'd argue that 25% is a better lower bound (with the community able to withstand a year of lower rates) and that 30% is healthy.

3. There are additional costs associated with low success rates beyond the inefficient use of proposers' time. I think this doesn't come across as clearly as it should in the write-up. We've talked about these in our earlier discussions. Maybe I'm missing something about the purpose of this document. Anyway, here's a cut at a list:

a. This is inefficient for proposers because they spend a lot of time and creative energy writing proposals, even E/VG ones, that are not funded. We've covered this aspect pretty well in the document.

b. The increased number of proposals is a burden on the reviewers. Not simply because there are more proposals, but because the process seems inefficient when so few top-ranked proposals are selected.

c. The administrative burden on the agencies is high. Not just because of staff time, but also because of the logistics and cost of running a large number of panels. Selecting only a small handful of proposals from a panel is not an efficient use of resources because the cost is large.

d. Faith in the process is lost or eroded when top-ranked proposals are not selected. If one's E/VG proposal is not selected once or twice, then the process seems arbitrary and confidence in the peer-review system declines.

e. People leave the field. This may affect less-senior people disproportionately, but it has been seen at all levels, in heliophysics at least. The effect may be different for individual researchers, soft-money people, and larger groups.

f. When the programs shrink beyond a certain point, parts of the program are lost. When this is done by the selection/non-selection of individual proposals, the long-term impacts on the discipline can be haphazard. Programmatically it may be better to deliberately place limits on particular avenues of research, rather than relying only on the outcome of panel reviews to determine which research areas are lost.

4. The conclusions about 'rebalancing' the program seem outside our scope. We haven't considered the impacts of further cuts to observatories and missions. What we have done quite well is characterize a problem and many of its impacts that must be addressed. I think our conclusion should be that more resources need to be allocated to competed research programs.

5. We should differentiate two kinds of approach to the problem - strategic and tactical. Tactical actions might include pre-proposals, limiting opportunities, limiting proposers, grant size and duration, etc. - many of the things we've discussed to deal with the immediate problem. However, the strategic goal is to ensure that the competed research program is of adequate size and scope to support the community in a way that allows us to accomplish the goals of the decadal surveys in an efficient and cost-effective manner.

6. Perhaps this is a quibble. I don't believe that many people simply resubmit the same proposal over and over again. If something isn't selected once, then additional work goes into it in order to improve the proposal for repeat submission - particularly when useful guidance is provided in the review process. If something fails twice, I'd be very reluctant to try a third time with the same idea. Perhaps this gets at the apparent implicit assumption that E/VG proposals are selected randomly and thus submitting multiple times gives better odds of selection. See 3.c. above.

7. The success-rate figure is good. I'd like to see an addition that shows the number of proposals selected each year, or alternatively the number of proposals submitted. Note that this should be the number of proposals BEFORE any mitigation (like two-step proposals).

8. In Figure 2, the second panel is misleading. Using 1000 as the baseline, the graph suggests a relentless increase in the number of unique proposers that looks very dramatic. In fact, going from 1025 to 1160 is only a 13% jump.
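
A quick check of this figure, taking the 1025 and 1160 values quoted above at face value: $(1160 - 1025)/1025 \approx 0.13$, i.e. about a 13% increase.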

9. Paragraph 2 of Section 1.3 describes a whopping increase in the number of PIs in the AAS from 1990 (why this date and not something more recent?) to the present - from 200 (7% of 3000) to nearly 600 (13% of 4500). The number has tripled and the fraction has doubled. I don't understand the purpose of this paragraph.
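
A quick check of the quoted round numbers, taken at face value: $0.07 \times 3000 = 210 \approx 200$ and $0.13 \times 4500 = 585 \approx 600$, so the count grows by $600/200 = 3$ (tripled) while the fraction grows by $0.13/0.07 \approx 1.9$ (roughly doubled).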

Submit a new survey

Drill down and fill in the gaps on Agency Statistics
