Interactive comment on “On biases in atmospheric CO inversions assimilating MOPITT satellite retrievals”

Eq. 1 is not itself used in the MOPITT retrieval algorithm to calculate retrieved CO profiles or total column values, as the manuscript implies; this equation merely describes the expected relationship between the retrieved profile, the a priori profile, and the true atmospheric state in a 'maximum a posteriori' retrieval method. Eq. 1 also contains a term A which should actually be the total column averaging kernel (a vector) and not the averaging kernel matrix (line 144).
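For reference, the relationship such an equation describes, in standard maximum a posteriori (Rodgers-style) notation — symbols here are generic, not taken from the manuscript — is

    x_rtv = x_a + A (x_true - x_a),

where x_rtv is the retrieved profile, x_a the a priori profile, x_true the true state, and A the averaging kernel matrix; the corresponding total column relation uses the total column averaging kernel, a vector a:

    C_rtv = C_a + a^T (x_true - x_a).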


1. The discussion, and especially Sections 5.3 and 5.4, is scientifically flawed. The authors draw conclusions without experimenting themselves. I strongly recommend that the authors reconsider their data assimilation experiments and setups before drawing such conclusions, or consider removing those two sections. I fear that without those two sections the paper would lose significant substance. Moreover, the sensitivity tests on model parameters are not convincing; a significant increase in model horizontal resolution and the use of a more detailed chemical scheme would have been more useful to point out intrinsic model deficiencies and uncertainties.
2. The quality of the scientific argumentation can be questioned. A lot of references are cited inappropriately: a number of the citations do not support the statements made in the present paper (see specific comments below). Demonstrations are often approximate and hand-waving. The conditional form is often used when it comes to conclusions (the forms "would" and "could" are widely used). The authors suggest and anticipate, from an incomplete set of experiments and with few references, in order to draw scientific conclusions.
3. Last but not least, I am concerned about the methodology itself: the statistical methodology and the significance of the diagnostics. The reliability of the data assimilation algorithm is not discussed either.
To support the three points above, please consider the following specific comments.

Specific comments:
Line 66: There are also other references that use MOPITT and data assimilation to study the temporal distribution and variability of CO, e.g. Inness et al., 2015; Miyazaki et al., 2015; Barré et al., 2015.
Line 86: Gaubert et al., 2016 do not invert surface emissions as Yin et al., 2015 do.
Lines 84-87: This statement is not well supported by either reference provided; it is probably only true in the Southern Hemisphere. For example, Barré et al., 2015, who assimilate two types of sounders, reach the opposite conclusion: MOPITT assimilation still underestimates CO at the surface over CONUS.
Lines 87-90: This statement is not clear at all. Please clarify.
Lines 316-317: The authors should detail exactly how they apply the observation operator to obtain Xmod. Have they smoothed the model profile with the averaging kernel? Have they considered interpolating partial columns from the model and then converting to log(vmr) to match the MOPITT data? The authors should refer to Barré et al., 2015, Section 2.2.4, for the correct approach. I am therefore uncertain whether the method used by the authors is the correct one, and hence doubtful about the validity of the results and of the discussion of the MOPITT profile validation in the rest of the paper.
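To illustrate the smoothing step the comment refers to: MOPITT averaging kernels are commonly applied in log10(VMR) space, so the model profile must be transformed before applying the a priori and the kernel. A minimal sketch (function name and example values are illustrative; real usage requires first interpolating the model onto the MOPITT pressure levels):

```python
import numpy as np

def simulate_mopitt_profile(x_mod, x_apriori, avg_kernel):
    """Smooth a model CO profile (VMR) with a MOPITT-style averaging kernel.

    Applies x_sim = x_a + A (x_mod - x_a) in log10(VMR) space, the space in
    which MOPITT averaging kernels are commonly defined, then converts back
    to VMR.
    """
    log_mod = np.log10(x_mod)
    log_ap = np.log10(x_apriori)
    log_sim = log_ap + avg_kernel @ (log_mod - log_ap)
    return 10.0 ** log_sim

# With an identity kernel the smoothed profile equals the model profile;
# with a zero kernel it collapses to the a priori.
x_mod = np.array([120e-9, 100e-9, 80e-9])
x_ap = np.array([100e-9, 90e-9, 85e-9])
A = np.eye(3)
```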
Lines 323-324: Does this mean that you are taking the nearest grid point? If so, is that appropriate? Since you are doing data assimilation science, you should be able to interpolate to the exact observation location.
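For reference, interpolating to the observation location rather than taking the nearest grid point is straightforward; a minimal bilinear sketch for a regular latitude/longitude grid (array names are illustrative):

```python
import numpy as np

def bilinear_interp(field, lats, lons, lat_o, lon_o):
    """Bilinearly interpolate a 2-D model field (nlat, nlon) on regular,
    ascending `lats`/`lons` axes to an observation location."""
    i = int(np.clip(np.searchsorted(lats, lat_o) - 1, 0, len(lats) - 2))
    j = int(np.clip(np.searchsorted(lons, lon_o) - 1, 0, len(lons) - 2))
    wy = (lat_o - lats[i]) / (lats[i + 1] - lats[i])
    wx = (lon_o - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - wy) * (1 - wx) * field[i, j]
            + (1 - wy) * wx * field[i, j + 1]
            + wy * (1 - wx) * field[i + 1, j]
            + wy * wx * field[i + 1, j + 1])
```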
Lines 325-328: It is unclear to me what operation the authors are performing here. Are you shifting or scaling the profile in order to keep the same total column value? What is the "uncertainty from vertical resolution change on the CTM"? Please rewrite, develop, and explain this better.
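To make the shift-versus-scale distinction concrete, the two operations that both conserve a pressure-weighted total column can be sketched as follows (a simplified column operator with layer pressure thicknesses `dp`; names are illustrative, not the manuscript's method):

```python
import numpy as np

def scale_to_column(profile, dp, target_column):
    """Multiplicatively scale a profile so that sum(profile * dp) matches
    target_column, preserving the profile's relative shape."""
    col = np.sum(profile * dp)
    return profile * (target_column / col)

def shift_to_column(profile, dp, target_column):
    """Additively shift a profile to the same target column, preserving
    absolute vertical gradients instead."""
    col = np.sum(profile * dp)
    return profile + (target_column - col) / np.sum(dp)
```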
Lines 330-333: It is unclear to me what exactly the authors are doing. Are they averaging monthly model values and then comparing them with monthly averaged observations? If so, the entire results of this paper would be flawed. Or are they interpolating the model to each observation at the right time? Moreover, it seems that the correlations in the rest of the paper are computed on monthly averaged biases, reducing the correlation sample to something small and probably not statistically significant: looking at the correlation plots, I see a sample size of around 12 to 14 points. Would it not be statistically sounder to calculate those correlations using the entire sample of observations (not reduced to averaged biases)? I am therefore doubtful about the robustness of this score throughout the further analysis of the paper.
Lines 649-652: The syntax of this sentence is not correct.
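To illustrate the sample-size concern: under the usual two-sided t-test for a Pearson correlation (df = n - 2, assuming independent samples), a correlation as high as 0.55 computed from only 12 monthly means is not significant at the 5% level. A minimal sketch (the r value is illustrative):

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing r != 0 from n paired samples (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# Two-sided 5% critical value of Student's t for df = 10 (i.e. n = 12).
T_CRIT_DF10 = 2.228

r, n = 0.55, 12
t = corr_t_stat(r, n)          # about 2.08
print(t < T_CRIT_DF10)         # prints True: not significant at 5%
```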
Lines 683-695 and Section 5.3 in general: The conclusion of "positive biases in the MOPITT retrievals" is flawed here. The authors use only one inversion technique, based only on the total column product. They infer only the surface emissions, which are not a quantity directly measured by MOPITT CO retrievals. Depending on model quality (i.e. resolution, chemistry, horizontal and vertical transport, and so on), inverting only the emissions can lead to good results for the wrong reasons; conversely, one can often have the "correct" emissions while having significant errors in the atmosphere. Data assimilation relies on observations but ALSO on models: you could have the best observation quality, but if the model is inaccurate the analysis and the subsequent forecasts will be degraded. Before jumping so quickly to such important conclusions, several things should be tested carefully, such as: assimilating the CO fields directly, with total CO columns and CO profiles; and rerunning the current experiments with a more complete and detailed chemistry.
Line 75: Please replace "plagued" with another word. Models are not plagued; they simply misrepresent the truth through man-made simplifications.
Lines 83-84: This is not what Hooghiemstra et al., 2012 show. From the conclusion of that paper: "However, in the remote SH (30-60° S), the comparison with MOPITT deteriorates from a 4% negative bias in the a priori to a 10% negative bias in the a posteriori solution, due to an emission decrease suggested by SH surface observations."
Line 88: Differences between what and what?
Line 90: While the statement is unclear to me, I do not think this is what Jiang et al., 2015 conclude.
Line 135: The authors should know what Bayesian means. There is nothing Bayesian in this equation.
Line 140: Clarify the statement; it sounds as if you model a profile from measurements.
Line 176: 2.5 by 3.75 degrees is now considered low resolution; change accordingly.
Line 179: Is it another model? I believe you still use LMDz, but with a different configuration. Change accordingly.
Line 181: Is changing just the latitudinal resolution from 2.5 to 1.89 degrees relevant? The change is then mainly a significant increase in the vertical resolution; why not keep the same horizontal resolution? Again, 1.89 by 3.75 degrees with 39 levels is not considered high resolution nowadays.
Lines 199-200: 2009 to 2011? From what month to what month? That could be anywhere from just over one year to almost three years.

Line 358: Please recall what those big regions are. Cite Yin et al., 2015.
Lines 370-376: This entire paragraph is confusing to me; please rewrite.
Line 643: What is the purpose of this paragraph? It is not clear what the authors are trying to demonstrate. Please clarify, develop, and rephrase.
Lines 692-695: Deeter et al., 2014 made direct comparisons between MOPITT V6T (which is the same as MOPITT V6J over the ocean) and HIPPO measurements, providing a quantification of the MOPITT biases: 1.5% ± 7.7% at the 200 hPa level. How can the authors explain such discrepancies between those results and Figures 4 and 7? The authors compare Figures 2 and 4 with Figure 7; again, the representativeness of the statistics made here should be considered. HIPPO and MOZAIC cover specific regions, whereas MOPITT provides a global picture. Is it reasonable to compare those figures in order to draw conclusions about biases without quantification?
Lines 702-707: This indicates an issue in your CO lifetime (see comments above). I would suggest having an estimate and quantification of your CO lifetime and budgets (e.g. as in Gaubert et al., 2016). This will help you investigate and quantify what