This isn't generative AI. This is predictive AI. Even though it uses some of the same methods, nothing is stolen, and all the results are heavily speculative and handled by people who (should) know the results are heavily speculative.
Meteorology is, by its nature, speculative. It's why we still talk about "Chance of rain" even with all of our advancements. This is just a different method of speculation that is automating a lot of the data/stat extrapolation we've been doing manually or with slower algorithms since the 60s.
Like look at the announcement: https://www.noaa.gov/news-release/noaa-deploys-new-generation-of-ai-driven-global-weather-models
"Marked improvement over traditional methods". They're testing this stuff against former methods and it's winning. We're not talking "Right 48% of the time" like Excel's AI trash.
If you wanna bat for "Okay but a human should check this stuff over first," yeah, okay, fine. I don't even disagree, but 98% of this art form is a lot of number crunching and deriving trends from a cloud of chaotic data. This is the absolute ideal use-case for this stuff.
@trysdyn gotta say, as much as I am anti-AI-hype: *yes*, this is *exactly* the kind of stuff "AI" is useful and helpful for. and yes, there are still humans involved, looking at the results and deciding whether they seem accurate or not. I know this because I regularly watch the NOAA "forecast discussion" page for my area, where the weather scientists discuss and dissect the products from *three different programs* that each tend to come up with slightly different (sometimes wildly different) results, and use their historical judgement (and knowledge of the foibles of each program) to decide what to send to the news sites, in a kind of "Minority Report" situation. this is adding another program to the mix, giving more and different data, which the meteorologists will weigh and either fold into or leave out of their predictions.
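to make that "several programs, one human judgement call" workflow concrete, here's a toy sketch (entirely hypothetical, not NOAA code or any real model): compare precipitation-probability outputs from a few models and flag the days where they disagree enough that a forecaster needs to step in. the model names, numbers, and threshold are all made up for illustration.

```python
# Hypothetical illustration: three models' "chance of rain" for the next 3 days.
# None of these names or values come from NOAA; they're placeholders.
model_outputs = {
    "model_a": [0.20, 0.55, 0.80],
    "model_b": [0.25, 0.60, 0.40],
    "new_ai_model": [0.22, 0.58, 0.75],
}

DISAGREEMENT_THRESHOLD = 0.25  # arbitrary cutoff for "wildly different"

for day in range(3):
    values = [forecast[day] for forecast in model_outputs.values()]
    spread = max(values) - min(values)          # how far apart the models are
    consensus = sum(values) / len(values)       # naive average as a starting point
    flag = "forecaster judgement needed" if spread > DISAGREEMENT_THRESHOLD else "models roughly agree"
    print(f"Day {day + 1}: consensus {consensus:.0%}, spread {spread:.0%} -> {flag}")
```

the real process is obviously far more involved (and the humans weigh models by their known foibles rather than averaging), but the shape is the same: more programs means more inputs to compare, not fewer humans.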