try:
    # import the entire product in lat/lon but only keep the variables we want
    product = harp.import_product(infile, operations='keep(latitude, longitude, '
        'solar_zenith_angle, viewing_zenith_angle, scan_subindex, '
        'datetime, '
        'O3_column_number_density, O3_column_number_density_uncertainty, '
        'cloud_top_albedo, cloud_top_pressure, cloud_fraction, '
        'cloud_optical_depth)')
    # three variables are not in the ingestion definition, so we need to ingest them by hand
    pf = coda.open(infile)
    product.O3_temp = harp.Variable(coda.fetch(pf, "/DETAILED_RESULTS/O3/O3Temperature"), ["time"])
    product.EW_CorrF_O3 = harp.Variable(coda.fetch(pf, "/DETAILED_RESULTS/O3/EastWestPostCorrectionFactorO3"), ["time"])
    product.Effective_SCD_O3 = harp.Variable(coda.fetch(pf, "/DETAILED_RESULTS/O3/ESCRingCorrected"), ["time"])
    coda.close(pf)
    # apply the geospatial filter; append only if the filter succeeded
    try:
        filtered_product = harp.execute_operations(product, "point_distance(40.6335,22.9563,150[km])")
        productlist.append(filtered_product)
    except harp.Error:
        pass
except harp.Error:
    pass

# merge the filtered products and export the result
try:
    average = harp.execute_operations(productlist)
    harp.export_product(average, outpath + "test_ozone.nc")
except harp.Error:
    pass
Hi Sander,
I am wondering if there is a faster way to perform the operations above. I am ingesting a GOME2 L2 orbital file, adding the variables not in the ingestion definition, applying a spatial filter, and then outputting per day whatever it found. Do you see any way this process could become faster? For example, I noticed that in the productlist I get 13 empty harp products and one that actually has data (over SKG, for that matter). Hence it is a list of 14 harp products, 13 of which are empty and not needed (14 orbits per day, only one over SKG). Per day this is not an issue of course, but what if I run a month? A year? Multiple years?! Horror.
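One idea I sketched (assuming point_distance is accepted as an import operation, and that harp raises harp.NoDataError when the filter leaves no samples — please correct me if either assumption is wrong) is to fold the spatial filter into the import itself, so the 13 empty orbits are rejected at import time and never reach the productlist:

```python
# Build the operations string: spatial filter first, then keep only the
# wanted variables. Coordinates/radius are the same Thessaloniki values
# as in my script above.
operations = ";".join([
    "point_distance(40.6335,22.9563,150[km])",
    "keep(latitude,longitude,solar_zenith_angle,viewing_zenith_angle,"
    "scan_subindex,datetime,O3_column_number_density,"
    "O3_column_number_density_uncertainty,cloud_top_albedo,"
    "cloud_top_pressure,cloud_fraction,cloud_optical_depth)",
])

# Then, per orbit file (sketch only):
# try:
#     product = harp.import_product(infile, operations=operations)
# except harp.NoDataError:
#     continue  # orbit never comes near the target point; skip it
# productlist.append(product)
```

That way only the one orbit over SKG would survive, and the coda.fetch calls for the extra variables would also only run for that orbit.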
Is there a way to call coda.fetch from within harp.import_product, so that all of this can happen in one go?
All the above is in Python with HARP 1.16.
Many thanks as ever,
MariLiza