(#fxa4bpa) @New_scientist Silicon Valley’s top AI models are terrible at almost everything. They only seem otherwise because people are easily fooled into believing they have capabilities they don't have.
(#cikr4oa) @New_scientist No, it can't. Your blurb is literally "if we had data we can't have, we could predict weather better". DeepMind is irrelevant in that statement -- anyone could.
(#yrdafza) @New_scientist No, Google does not predict this. "Google AI" has been self-promoting like this for decades. Remember when they used to brag that they could predict the onset of flu season weeks before it started? That silently went away because they got it badly wrong many times and people caught on to how bad their "predictions" actually were.
They can't stop themselves. Anything about AI coming out of big tech companies these days is marketing, not real, and certainly not science.
(#addgj3q) @New_scientist because of course they have.
Emily Bender, a computational linguist and excellent critic of this generative AI nonsense, uses the analogy of an oil spill to characterize what is happening as a result of generative AI. It's polluting the world with false information, false images, false "academic" articles, false books. The companies that create this stuff are not cleaning up their misinformation spill; they're letting the mess spread everywhere. It's being used to commit crimes, and that will only get worse -- just as an uncontained oil spill destroys entire ecosystems.