Welcome to the Earthdata Forum! Here, the scientific user community, subject matter experts from NASA Distributed Active Archive Centers (DAACs), and other contributors discuss research needs, data, and data applications.
So for this subscription, you would use something like the URL below to retrieve the list of files that have matched it. Each entry returned includes the URL needed to download the file, so there is no need to try to predict any file names. For your other subscriptions, just change the subID argument in the URL below.
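A minimal sketch of that request in shell, assuming the OB.DAAC file_search endpoint and its subID / results_as_file parameters (the exact URL is whatever was posted earlier in the thread; the endpoint here is an assumption, and 1234 is a hypothetical subscription ID):

```shell
#!/bin/sh
# Sketch: fetch the list of files matched by a subscription.
# Endpoint and parameter names are assumptions based on the OB.DAAC
# file_search API; substitute the actual URL from the thread.

# Build the file_search URL for a given subscription ID.
file_search_url() {
  subid="$1"
  echo "https://oceandata.sci.gsfc.nasa.gov/api/file_search?subID=${subid}&results_as_file=1"
}

# Uncomment to actually retrieve the list (one download URL per line):
# wget -q -O filelist.txt "$(file_search_url 1234)"
file_search_url 1234
```

The retrieved filelist.txt then contains one download URL per line, ready to feed to wget.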
If you have a shell script that loops through each of your subscriptions, a cron job can call it periodically, and any new files for each subscription will be downloaded. You'll probably want to keep track of the files you've already downloaded so you don't waste time and bandwidth fetching them again; wget's --no-clobber option can help with that.
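A sketch of such a script, with the actual downloads commented out; the subscription IDs, endpoint, and file names are all placeholders, and --no-clobber makes repeated cron runs skip files that are already on disk:

```shell
#!/bin/sh
# Sketch: loop over several subscription IDs and download any new files.
# Could be run from cron, e.g.:  0 * * * * /path/to/fetch_subs.sh
# SUBIDS and the file_search endpoint below are assumptions.

SUBIDS="1234 5678"

fetch_subscription() {
  subid="$1"
  list="filelist_${subid}.txt"
  # 1) Get the current list of matching file URLs (one per line):
  # wget -q -O "$list" "https://oceandata.sci.gsfc.nasa.gov/api/file_search?subID=${subid}&results_as_file=1"
  # 2) Download them; --no-clobber skips files already present:
  # wget --no-clobber -i "$list"
  echo "would fetch subscription ${subid} into ${list}"
}

for id in $SUBIDS; do
  fetch_subscription "$id"
done
```

Adding a new subscription is then just a matter of appending its ID to SUBIDS.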
by ermill99 » Fri Feb 14, 2020 6:32 pm America/New_York
John, I was able to successfully call the file with the links. I'm trying to automate the daily updated .png files and am wondering if there is a way to just call the .png files. Maybe I could rename the file with wget? I'm still trying to figure out how to use the URLs in my HTML, but the URL will still change. I've got some thinking to do on this.
I'm not sure I understand why there is a need to try to predict part of the file name when the file_search URL gives you the actual names of the files for the subscription. Whether you get the names from file_search or got lucky and predicted them, you would use the same URL to download them.
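On the renaming question: wget's -O option lets you choose the local file name, so the .png entries from the file_search list could be saved under a fixed name that the HTML references, even though the remote name changes daily. A sketch, using an illustrative list in place of the real filelist.txt (the sample URLs and the latest_chl.png name are made up):

```shell
#!/bin/sh
# Sketch: filter the .png URLs out of a file_search result and save the
# image under a fixed local name so a web page can reference it directly.
# The sample list below stands in for the real filelist.txt.

cat > filelist.txt <<'EOF'
https://example.invalid/files/A2020045.L3m_DAY_CHL.nc
https://example.invalid/files/A2020045.L3m_DAY_CHL.png
EOF

# Keep only the .png entries from the list.
png_urls() {
  grep '\.png$' filelist.txt
}

# wget -O picks the local name, so the page's URL never changes:
# png_urls | while read -r url; do wget -q -O latest_chl.png "$url"; done
png_urls
```

With that, the HTML can point at latest_chl.png permanently, and the cron job overwrites it each day.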