This seems to be a simple task, but I haven’t yet found an easy and efficient way to do it.
I’m working on a Python script with which I can export daily (or even monthly) netcdf L3 files that I can later use for data analysis.
Let’s say I have 27 netcdf files covering 2 days of global HCHO coverage; how can I export 2 netcdf files with the daily regridded data?
My current approach is to group the input files using wildcards and then loop over the list with something like the following code, but this doesn’t seem to be the most efficient way.
product_path = '.../.../hcho_L2' # level2 product path
export_path = '.../.../hcho_L3' # path for the created level3 files
input_files_OFFL = sorted(list(iglob(join(product_path, '**','*OFFL*HCHO*.nc'), recursive=True))) # sorting level2 files into a list
operations = ";".join([
    "tropospheric_HCHO_column_number_density_validity>50",
    "keep(latitude_bounds,longitude_bounds,datetime_start,datetime_length,tropospheric_HCHO_column_number_density)",
    "exclude(datetime_length)",
    "bin_spatial(1801,-90,0.1,3601,-180,0.1)",
    "derive(tropospheric_HCHO_column_number_density [Pmolec/cm2])",
    "derive(latitude {latitude})",
    "derive(longitude {longitude})",
    "count>0"
])
reduce_operations = ";".join([
    "squash(time, (latitude, longitude, latitude_bounds, longitude_bounds))",
    "bin()"
])
for i in input_files_OFFL:
    try:
        harp_L2_L3 = harp.import_product(i, operations, reduce_operations=reduce_operations)
        export_folder = "{export_path}/{name}".format(export_path=export_path, name=i.split('/')[-1].replace('L2', 'L3'))  # renaming the new level3 files
        harp.export_product(harp_L2_L3, export_folder, file_format='netcdf')
    except harp.CLibraryError as e:  # skipping corrupted files when generating the new list of level3 products
        print(e)
With the above code snippet I am able to convert every single L2 file to an L3 file… but I wish I could export daily files rather than single-retrieval L3 files. Can this be done with harpmerge?
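One way this might be done from Python (a sketch, not tested against real data): `harp.import_product` also accepts a *list* of file paths and merges them into a single product, applying `reduce_operations` after each file is appended. So grouping the L2 files by sensing date and importing each day's list should yield one daily L3 product per group. The helper names `group_by_day` and `merge_day` are hypothetical, and the filename parsing assumes the standard Sentinel-5P naming convention, where the sensing start appears as the first `YYYYMMDDThhmmss` timestamp in the file name.

```python
import re
from collections import defaultdict

def group_by_day(paths):
    """Group L2 file paths by sensing date parsed from the filename.

    Assumes the standard S5P naming convention, where the sensing start
    time is the first 'YYYYMMDDThhmmss' timestamp in the file name.
    """
    groups = defaultdict(list)
    for path in paths:
        match = re.search(r'(\d{8})T\d{6}', path.split('/')[-1])
        if match:
            groups[match.group(1)].append(path)
    return dict(groups)

def merge_day(day, files, operations, reduce_operations, export_path):
    """Import one day's L2 files as a single merged product and export it.

    harp.import_product accepts a list of paths; reduce_operations is
    applied after each file is appended, which keeps memory use bounded.
    """
    import harp  # imported lazily so the grouping helper stays standalone
    merged = harp.import_product(files, operations,
                                 reduce_operations=reduce_operations)
    harp.export_product(merged, "{0}/hcho_L3_{1}.nc".format(export_path, day),
                        file_format='netcdf')

# Hypothetical usage:
# for day, files in group_by_day(input_files_OFFL).items():
#     merge_day(day, files, operations, reduce_operations, export_path)
```

The same merge should also be possible from the command line with `harpmerge`, passing the operations and reduce operations as options and listing all of a day's L2 files before the output file (check `harpmerge --help` for the exact flag names). Note that when importing a list, a single corrupted file will fail the whole day's merge, so any per-file error handling would need to happen before grouping.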
Thanks!