mastodon.uno is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon.Uno is the main Italian Mastodon community. With 77,000 registered users it is the largest Italian Mastodon node: environmentalist at heart, supporting privacy and the Open Source world.

Server statistics:

6.2K active users

📰 "On 20 April 2025, the Facebook page Medycyna i Środowisko, which popularizes knowledge in biology and environmental medicine, published a post accusing the news portal Onet.pl of using artificial intelligence (AI) to write an article in the Onet Styl Życia section. The article was about growing garden cress, but its bibliography contained non-existent scientific references."

pl.wikinews.org/wiki/Onet_zost

pl.wikinews.org · Onet accused of writing unreliable articles with the help of artificial intelligence; one article cited non-existent scientific publications - Wikinews, the free news source
#media #Onet #AI

Artificial intelligence isn't funny or playful, but it is confident, whether it's right or wrong. @aftermath.site's Riley Macleod looks at the latest example of this — if you ask it the meaning of a made-up idiom, it will tell you what it means — and why it's a problem. "In one apt example, noted language genius and Defector writer Albert Burneko asked Google to define 'ask a six-headed mouse, get a three-legged stool,' which Google says 'suggests asking the wrong question or seeking the wrong advice from someone unqualified can lead to a nonsensical or unhelpful response.'"

flip.it/UAEe33

flip.it · AI Has Come For The Horse In The Hospital - Aftermath · Google's AI tries to define idioms users made up

I usually don't bother with writing about #AI, but seeing an interesting post about "The Era of the AI Idiot" made me think of the "expert systems" #AI boom of the 1980s.

linkedin.com/posts/sjoshuan_th

www.linkedin.com · We are Living in The Era of the AI Idiot | Salve J. Nilsen

This article reminds me of one of the previous "AI booms" of the '80s: the "expert systems" boom. Back then, new algorithms and heuristics led to a new type of AI software that was useful for supporting experts. Think of medical doctors getting assistance in diagnosing complex or difficult cases. One of the lessons from that boom was that training these systems required a level of expertise that was very difficult to get hold of (this is called the "knowledge acquisition" problem). Not training these models on good enough data resulted in higher demands on the competence of their users: they became "systems for experts", so to speak.

This looks kind of similar to what's going on with LLMs, doesn't it? The difference today is that LLMs are trained on "the Internet" (GIGO), which means their output is really only useful to experts capable of detecting the subtle forms of hallucination the LLM may produce. Which leads to an interesting realization: if expertise is required to make an LLM useful, then where do experts come from? (to quote Hans-Petter Fjeld). Training people ("creating experts") with LLMs can be exceptionally dangerous, which is the main point of the linked article.

Knowing that one of the current tech policy advisors to the government is an ex-director of policy at the Tony Blair Institute, I'm certain that what Blair describes in this video will make it into the proposal: youtu.be/alnNJ3vR9qY

How a reasonable person can campaign for facial recognition – citing statistics on how it led to more arrests as an argument, despite its proven racial bias – is baffling.
