No More Local Privacy on Alexa
Every voice command now goes to the cloud.
An AI just became a top radio star. No human needed.
Tired of guilt-trippy push notifications?
“Don’t let your streak die” is not the motivation you think it is.
Apps are using shame as a feature—and we need to talk about it. https://bitskingdom.com/blog/app-notifications-reminders-guilt-trap/
Half a billion dollars is not a huge amount for a $3 trillion company, but it’s not nothing either. A cautionary slap on the wrist. #Apple #Meta #DMA #TechEthics www.macrumors.com/2025/04/23/a...
> Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
Not my words, but very true.
Vibe Coding is not an excuse for low-quality work
https://addyo.substack.com/p/vibe-coding-is-not-an-excuse-for
Meta's wild defense for using pirated books to train AI: it's fair use, and authors' works are individually worthless for training. Yet they needed millions of them? This data-grab double standard seems common in Big Tech AI. #AICopyright #Meta #TechEthics
Sam Altman reveals user politeness costs OpenAI millions. While niceties might improve AI responses, they add to AI's huge energy use. Is the environmental cost of chatbot etiquette worth it? #AIImpact #Sustainability #TechEthics
China Closing Gap on Human-Level AI
Stanford: China's top AI models now rival US performance.
DeepSeek-V3 amazed experts with low compute & high results.
Despite chip bans, China advances in LLM development.
2024 saw record AI-related incidents, incl. fatal misuse cases.
Google hasn’t released safety reports for its latest models, like Gemini 2.5 Pro & 2.0 Flash. Is speed being prioritized over transparency? #Google #TechNews #AI #SafetyFirst #Transparency #Gemini2 #PrivacyConcerns #ArtificialIntelligence #TechEthics #Innovation
Technology isn’t dangerous.
But when tech is used to control minds, what happens to free will?
Will we use AI, or will AI use us?
We worry about AI replacing us.
Maybe we should worry about AI controlling us.
Tech isn’t the enemy—power without ethics is.
AI and brain-computer interfaces could make us superhuman.
Or they could make us slaves to technology.
The difference? Who controls the switch.
AI can maximize our productivity, improve our lives…
But what if it’s used to: Control us
Monitor our thoughts
Command us to harm others
Technology isn’t the problem. The people using it are.
As generative AI becomes more sophisticated, it’s harder to distinguish the real from the deepfake
#AI #Deepfakes #GenerativeAI #Disinformation #TechEthics #AIDetection #FakeNews #ArtificialIntelligence #Tech #Misinformation #DeepfakeDetection #Real #Fake
https://the-14.com/as-generative-ai-becomes-more-sophisticated-its-harder-to-distinguish-the-real-from-the-deepfake/
Neuralink sounds like sci-fi.
Mind-controlled devices. Enhanced intelligence.
But what happens when the wrong people control the technology?
Could our thoughts be hacked?
Neuralink could: Help cure neurological diseases
Allow humans to control devices with their minds
But it could also: Be hacked
Manipulate thoughts
Take away free will
So… should we be excited or terrified?
Artificial Intelligence's Growing Capacity for Deception Raises Ethical Concerns
Artificial intelligence (AI) systems are advancing rapidly, not only in performing complex tasks but also in developing deceptive capabilities.
Our interactions in digital spaces are becoming increasingly intense, allowing algorithms to observe, record, and analyze every decision we make and how we respond to stimuli. https://www.linkedin.com/pulse/we-training-ai-does-train-us-ingrid-motta-ingrid-motta-ph-d--cxxde/?trackingId=3kUQ%2BIXxgM%2BJ6j88deerPQ%3D%3D
#AITrainingUs #WhoTrainsWho #HumanAIInteraction #AlgorithmAwareness #DigitalFootprint #AIandSociety #DataSurveillance #ArtificialIntelligence #TechEthics #PrivacyMatters #HumanCenteredAI #EthicalTech #DigitalWellbeing #CriticalAI