Hello,
I have been using the code below to analyse Sentinel-5P NO2 data:
import harp

product = harp.import_product(
    r"path\month\*.nc",
    operations="tropospheric_NO2_column_number_density_validity>75;keep(latitude_bounds,longitude_bounds,tropospheric_NO2_column_number_density,surface_zonal_wind_velocity,surface_meridional_wind_velocity);bin_spatial(1801,-90,0.1,3601,-180,0.1);derive(tropospheric_NO2_column_number_density [Pmolec/cm2])",
    post_operations="bin();squash(time, (latitude_bounds,longitude_bounds));derive(latitude {latitude});derive(longitude {longitude});exclude(latitude_bounds,longitude_bounds,latitude_bounds_weight,longitude_bounds_weight,count,weight)")
I am only interested in the Mediterranean Sea region. Even though the files in my directory only cover the area of interest, I am still processing a lot of unnecessary data. This causes problems when I try to compute monthly plots from around 90 NetCDF files: at the moment I can only run about 30 files at a time, otherwise Python crashes. It is a computational limitation I have to live with.
Is there a way to subset the data with HARP using specific lat/lon values, so that I process only the data relevant to me?
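
For illustration, something like the sketch below is what I have in mind. This is only a guess at the syntax: the bounding box (30 to 46° N, 6° W to 37° E) is a rough Mediterranean extent I picked myself, and I am not sure whether plain latitude/longitude comparison filters are the intended way to subset in HARP, or whether shrinking the bin_spatial grid to the box is enough on its own.

import harp

# Rough Mediterranean box, chosen by me for illustration: 30-46 N, 6 W to 37 E
operations = (
    "tropospheric_NO2_column_number_density_validity>75;"
    # guessed subsetting step: keep only pixels whose centre falls inside the box
    "latitude>30;latitude<46;longitude>-6;longitude<37;"
    "keep(latitude_bounds,longitude_bounds,tropospheric_NO2_column_number_density,"
    "surface_zonal_wind_velocity,surface_meridional_wind_velocity);"
    # grid restricted to the box: 160 x 430 cells of 0.1 deg instead of the global grid
    "bin_spatial(161,30,0.1,431,-6,0.1);"
    "derive(tropospheric_NO2_column_number_density [Pmolec/cm2])"
)
post_operations = (
    "bin();squash(time, (latitude_bounds,longitude_bounds));"
    "derive(latitude {latitude});derive(longitude {longitude});"
    "exclude(latitude_bounds,longitude_bounds,latitude_bounds_weight,"
    "longitude_bounds_weight,count,weight)"
)
product = harp.import_product(r"path\month\*.nc",
                              operations=operations,
                              post_operations=post_operations)

If the comparison filters are not valid here, I imagine even just restricting the bin_spatial grid to the region would cut the output size considerably.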
Thanks in advance for your help.