I have daily data (366 netCDF files).
I want to extract chlorophyll a from these netCDF files, one per day (366 days), at multiple stations (about 3,627 stations; the area of interest is prepared in a CSV file).
I have seen several tutorials on combining and extracting netCDF files, and in all of them the files contain a "time" coordinate along with lat and lon. But my netCDF files have no time coordinate!
One file holds one day's values. Chlorophyll a, lat, lon, and palette are all contained within a single file (e.g. dict_keys(['chlor_a', 'lat', 'lon', 'palette'])).
I have tried some Python code from the tutorials.
I also tried to derive the date variable from the file name, but it did not work!
This is an example file name: A2020001.L3m_DAY_CHL.x_chlor_a.nc, for 1 January 2020.
I got an empty CSV table when I exported it.
Has anyone done this? Please advise me on the Python code.
I am looking forward to hearing some suggestions!
Thanks in advance
- User Services
If you have any tutorial links or documentation on how to combine and extract, could you kindly let me know? It would be extremely helpful.
Thanks in advance!
- Subject Matter Expert
I don't know if 366 files will cause a memory issue in this tool, but if it does you can do it in smaller chunks of files and then combine the resulting text files of the extracted values.
The command-line version of this Pixel Extraction tool is a gpt tool called PixEx. However, the command-line gpt tools are not yet operational as of SeaDAS 8.1.0. The same tool exists in the software SNAP; SeaDAS and SNAP share much of the same underlying code and tools. So if you wish to do this at the command line, use SNAP for the moment, until SeaDAS supports this feature there.
(Attachment: Screen Shot 2021-12-21 at 9.58.48 AM.png)
SeaDAS focuses on the specific needs of ocean color processing. Once you have level-3 mapped files in NetCDF4-CF format, there are tools that support multiple disciplines beyond just ocean color. I have used NCO <https://github.com/nco/nco> and CDO <https://mpimet.mpg.de/cdo>. Both are often available in Linux distros or MacPorts. They are command-line programs suitable for large-scale batch processing, but the learning curve is long and steep.
For CDO, you add times to the individual files and then use "mergetime" to combine them into one or more larger files (depending on your system capacity).
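The same idea (derive each file's date, attach it as a time coordinate, then merge) can also be done in Python. A minimal sketch, assuming the AYYYYDDD naming shown earlier in this thread; the xarray part is left commented since it needs the actual files on disk:

```python
import os
from datetime import datetime

def date_from_name(fname):
    """Parse AYYYYDDD (year + zero-padded day of year) from an L3m file name."""
    return datetime.strptime(os.path.basename(fname)[1:8], "%Y%j").date()

# Example with the file name from this thread:
# date_from_name("A2020001.L3m_DAY_CHL.x_chlor_a.nc") -> datetime.date(2020, 1, 1)

# With xarray installed, each file can be given a time dimension and merged:
# import glob
# import xarray as xr
# def add_time(ds):
#     return ds.expand_dims(time=[date_from_name(ds.encoding["source"])])
# files = sorted(glob.glob("*.L3m_DAY_CHL.x_chlor_a.nc"))
# combined = xr.open_mfdataset(files, preprocess=add_time, combine="by_coords")
```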
I have now solved it by extracting in R!
I first followed instructions that said to connect to the server, but that procedure was too difficult for me (it seemed odd and largely manual), so I went my own way.
In the end I adapted R code from a friend.
However, the date and time variables were absent from my netCDF files.
To begin, I had to create 366 file names covering January 1 to December 31 of 2020.
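That list of names can be generated in Python, for example (a sketch; the naming pattern is taken from my files above):

```python
from datetime import date, timedelta

start = date(2020, 1, 1)
# "A" + year + zero-padded day of year, e.g. A2020001 for 1 January 2020
names = [
    f"A{d.year}{d.timetuple().tm_yday:03d}.L3m_DAY_CHL.x_chlor_a.nc"
    for d in (start + timedelta(days=i) for i in range(366))
]
```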
After extracting, I wanted to use each file name as a variable name, but I did not know how.
Also, when I downloaded the whole year of netCDF files, some days were missing.
I addressed that by creating a blank netCDF file in Python and substituting it for each missing day.
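To check which days are actually missing, the expected names can be compared against the download folder. A sketch (the folder path is just the one from my R script below; adjust as needed):

```python
import os
from datetime import date, timedelta

def missing_files(folder, year=2020, ndays=366):
    """Return the expected daily file names that are absent from `folder`."""
    start = date(year, 1, 1)
    expected = {
        f"A{year}{(start + timedelta(days=i)).timetuple().tm_yday:03d}"
        ".L3m_DAY_CHL.x_chlor_a.nc"
        for i in range(ndays)
    }
    return sorted(expected - set(os.listdir(folder)))

# missing = missing_files("C:/Users/nanae/Documents/R/R_extract/")
```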
##############Python code for making a blank netCDF file##############
# Run this on a COPY of an existing daily file, renamed to the missing date,
# so the lat/lon grid and metadata stay consistent with the real files.
import netCDF4 as nc
import numpy as np

ncfile = nc.Dataset("A2020001.L3m_DAY_CHL.x_chlor_a.nc", 'r+')
chl = ncfile.variables['chlor_a'][:]
# Fill with NaN (not 0) so the placeholder pixels read as missing, not as zero chlorophyll
ncfile.variables['chlor_a'][:] = np.full(chl.shape, np.nan)
ncfile.close()  # flush the changes to disk
##################### R code for extracting data ###################
library(raster)
library(sp)
# Read chl-a data from all netCDF files in the folder
raster_files <- list.files('C:/Users/nanae/Documents/R/R_extract/', pattern = '*.nc', full.names = TRUE)
chl_aqa <- raster::stack(raster_files, varname = 'chlor_a')
# Name each layer with its date (start date 2020-01-01, end date 2020-12-31)
names(chl_aqa) <- seq(as.Date("2020-01-01"), by = "day", length.out = 366)
# Import stations of interest from CSV (columns: lon, lat)
station_data <- read.csv("stations.csv", header = TRUE)
station_data <- SpatialPoints(coords = station_data, proj4string = CRS("+proj=longlat +datum=WGS84"))
# Extract the chlorophyll value at every station for every day
extract_value <- raster::extract(chl_aqa, station_data)
write.csv(extract_value, file = "extract_chlor_a.csv")
I hope this solution helps.
If anyone finds an error or something that can be improved, kindly advise, please.
Thanks for the additional detail. I think pixel extraction should work in the SeaDAS GUI, but a laptop may not have the capacity (mainly RAM) to process many files at once. Linux excels at running the same program repeatedly to apply the same process to a list of files.