The planning for this workshop started late last year, but awareness of it now seems to be growing. With the President's proposed budget massively cutting research spending, and an apparent public distrust of scientific research, it seems like a prime time for scientists to take some time for self-reflection. How are we doing as a field? Are we making progress toward a better understanding of the world around us? Where could we improve the pipeline from research understanding to operations and technology, and ultimately to benefiting society?
We do ask these questions fairly regularly, both as individuals and as a community. When we write papers we have to put our research and new results in the context of previous work and understanding. When writing proposals we must state how the proposed research builds on previous understanding and might shed light on a question that has yet to be cracked. When refereeing those papers or grant proposals, we must judge the author's or proposer's assessment of where their work stands. We ask these questions when we propose conference sessions, and after conference sessions at dinner with colleagues or later in the evening over drinks at the bar. However, those conversations don't often make it into writing. Granted, we're also perhaps not looking at the whole pipeline, from research to its impact on society.
Once a year the Living With a Star (LWS) steering committee determines, with the help of the community, what areas are ripe for research and suggests grant calls with a specific emphasis on research that will ultimately benefit society. A couple of times a decade we take a slightly different, somewhat more far-afield look at the entire field. These decadal and mid-decadal surveys try to summarize what big steps were made in the last 5-10 years and propose the most pressing research areas we as a field should pursue in the coming 5-10 years. The research in the decadal survey may not be called out as directly benefiting society, but all of it advances our understanding of the world we live in. As it often takes more than 10 years to go from proposing a satellite mission to launch, and then many more years for the data to be analyzed, 10 years isn't necessarily a long-term view. It may also take decades before we can determine which new theories and ideas are more correct than others. For instance, almost 20 years passed between the time CRRES examined the radiation belts and the launch of the Van Allen Probes to revisit that region. As there is currently no new radiation belt mission planned, it may be another 20+ years before we can return again. Science is a slow process.
Science may be a slow process, but we can still see how we're advancing, which is really kind of cool! We see it in our ability to provide better space weather forecasts with more accurate models. We see it in the evolution of our theories. We see it in the new discoveries we make when we send new, sensitive instruments into space. But a lot of that is qualitative. Can we show our progress more quantitatively? We think so, and that is one of our major goals for the workshop in Florida. This is all still a work in progress, so if you have comments or suggestions, or would like to join the group, please let us know! Below I've outlined a few of the different approaches we're hoping to take.
Application Readiness Levels:
The first way we have proposed to track our progress is with Application Readiness Levels (ARL). ARLs are very similar to technology readiness levels (TRL), which are used for instrumentation, but instead are applied to a specific application. These were originally developed by the Earth Science Division at NASA (one of the divisions that would receive massive cuts under the proposed budget, but that's a whole other post). We hope to use them for space physics and space weather applications. For instance, one of the applications we hope to track is our ability to predict satellite drag. Although we think of the atmosphere at the altitude of a satellite as almost nonexistent, it is still significant. In fact, with changes in solar intensity and geomagnetic activity, the density changes enough to increase the drag on a satellite, which leads to the satellite losing altitude and ultimately de-orbiting unless it is boosted back up. Probably the most famous spacecraft to have been re-boosted are the Hubble Space Telescope and the International Space Station. We have models that try to account for and predict the amount of drag a satellite would experience. With the ARL we can track how well the modeling currently does, as well as identify where we may need to improve our understanding, gather more data for inputs to the models, and ultimately refine our forecasts.
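To give a feel for why atmospheric density matters here, below is a minimal sketch of the standard drag equation that sits at the heart of any drag model. The numbers are illustrative assumptions only (roughly ISS-like values), not from any real mission or operational model; real forecasting systems get their density from empirical or physics-based atmosphere models driven by solar and geomagnetic indices.

```python
def drag_acceleration(rho, v, cd, area, mass):
    """Deceleration due to atmospheric drag, in m/s^2.

    Standard drag equation: a = 0.5 * rho * v^2 * Cd * A / m
      rho  - atmospheric density (kg/m^3); this is the hard part to predict,
             since it swings with solar intensity and geomagnetic activity
      v    - orbital speed relative to the atmosphere (m/s)
      cd   - drag coefficient (dimensionless, typically ~2.2 for satellites)
      area - cross-sectional area facing the flow (m^2)
      mass - spacecraft mass (kg)
    """
    return 0.5 * rho * v**2 * cd * area / mass

# Illustrative, ISS-like values (assumed for this sketch):
# density ~4e-12 kg/m^3 near 400 km, speed ~7660 m/s,
# Cd ~2.2, area ~1000 m^2, mass ~420,000 kg
a = drag_acceleration(4e-12, 7660.0, 2.2, 1000.0, 420000.0)
```

The resulting deceleration looks tiny, but acting continuously it is enough to drag a station-sized spacecraft down over months, and a geomagnetic storm can multiply the density (and hence the drag) many times over in a day.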
Tracking Usage of Data:
Citations are perhaps the standardized testing of science: they may not always be the best way to determine an increase in understanding, but they are nicely quantitative. The results from looking at how often a data set is used in published work probably shouldn't be taken alone, but they do often show a trend. The question we hope this type of examination will answer is how many people use the data from a specific mission. Do only people on the mission team use the data, or does the community (both US and international) see it as adding value to their studies? The more people who use a data set, the stronger the implied case for continuing to collect that type of data.
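As a toy illustration of the kind of tally we have in mind, here is a minimal sketch in Python. The paper records and data set names are entirely made up; a real analysis would pull from a citation database rather than a hand-written list.

```python
from collections import Counter

# Hypothetical records: (paper_id, data_set_used, author_on_mission_team)
papers = [
    ("p1", "MissionA/instrument1", True),
    ("p2", "MissionA/instrument1", False),
    ("p3", "MissionB/instrument2", False),
    ("p4", "MissionA/instrument1", False),
]

# Total uses of each data set in the literature sample
usage = Counter(ds for _, ds, _ in papers)

# Uses by authors outside the mission team
outside_team = Counter(ds for _, ds, on_team in papers if not on_team)

# Fraction of uses coming from outside the team: a rough proxy for
# whether the broader community sees the data as adding value
reach = {ds: outside_team[ds] / usage[ds] for ds in usage}
```

Even this crude "outside-team fraction" separates data sets used only by their own mission teams from those the wider community has adopted, which is closer to the question we actually care about than a raw citation count.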
Tracking Usage of Models/Forecasting/Anomaly Analysis:
The data for this will perhaps be a bit harder to find. However, it would be incredibly useful to be able to track who our end users are, where they get their data, forecasts, and research from, and how they use it. Some of this gets into a bit of a sticking point. For example, if a single event upset occurs on a spacecraft, it's hard to confirm that space weather was indeed the cause. We also need the help of end users here, as they are the ones who see when space weather affects them and how well our forecasts are able to predict and inform them. They can also tell us whether what we think is important information is indeed useful to them. We, the scientists, may think that it's important to predict whether a solar storm is going to produce a geomagnetic storm, but the end user may only care to know if the CME is going to hit the magnetosphere while their equipment is up during the day.
Of course there are many other ways we could try to track our field's progress. Over the coming months, as our group meets and starts to gather and analyze the data, I'll try to update you all on what we find. And if you have any suggestions, comments, or ideas, let us know!