Graduation cap being tossed into the air against a clear blue sky, with the text overlay: "Is AI Taking Your Job Or Helping You Build One?"

Should Gen Z Fear Being Replaced By AI?

If you're graduating from college in the next few years, you'll be stepping into a job market changing faster than anyone can predict. Some AI leaders say half of entry-level white-collar jobs could be automated within five years, while others argue this generation has access to tools that could help solo founders build billion-dollar companies. The gulf between these predictions shows just how uncertain the future of work has become. Salesforce recently cut 4,000 jobs, citing AI capabilities. Other companies are moving to eliminate specific roles that artificial intelligence can handle. Meanwhile, young entrepreneurs are using these […]

A group of AI safety activists protest in front of a modern glass building labeled “OpenAI” in Silicon Valley. Protesters hold cardboard signs that read “ETHICS OVER PROFIT” and “AI NEEDS REGULATION.” One woman speaks passionately into a megaphone. A drone hovers above the crowd, and people appear serious and determined. Bold white and yellow text over the image reads: “10 Days Without Food To Stop AGI?” The scene conveys urgency and resistance against unchecked AI development.

Anti-AI Activists On Hunger Strike At Anthropic And DeepMind

Two anti-AI activists are now more than ten days into hunger strikes outside the headquarters of major AI companies. Guido Reichstadter is protesting outside Anthropic's offices, while Michael Trazzi is at DeepMind. Both argue that the current path of artificial intelligence poses an existential risk to humanity. AI companies, they contend, are racing toward dangerous territory that could harm families and children around the world. The protests represent a growing, though still small, movement of people who believe current AI development is reckless and could end in catastrophe. Reichstadter, a father of two, previously staged a 28-hour protest on a bridge.

Illustration of a young woman holding a smartphone, with her face split in half — one side smiling and bathed in warm red light with floating heart icons, the other side cracked, pale, and sad, shown in cold blue tones with digital glitches and binary code in the background. The text overlay reads: “Is This End of Real Dating? Millions Are in Love With AI!”

New Study Finds Romantic AI Chatbots Lead To Poorer Mental Health

One in four young adults now has regular conversations with AI romantic partners, spending an average of 50 minutes per week chatting with artificial companions. That's according to new research led by Brian Willoughby, a professor at Brigham Young University, whose team surveyed nearly 3,000 American adults about their use of romantic AI technologies. The results show that 19% of all adults have interacted with AI chatbots designed as romantic partners, while 13% actively seek out AI-generated social media accounts featuring idealized virtual people. Notably, people in committed relationships are actually more likely to use romantic AI than single people, challenging assumptions about loneliness […]

Three prominent AI billionaires stand in front of a high-security mansion compound with underground bunkers, a helipad, and lush greenery. Bold text reads: “Why Are the AI Billionaires Building Bunkers?” with "Building Bunkers?" highlighted in red.

Top AI Executives Are Secretly Building Bunkers – Should We Be Worried?

It sounds like something from a sci-fi movie: the same tech billionaires shaping our digital future are quietly preparing for a much darker one. From private bunkers in Hawaii to fortified basements and New Zealand hideouts, several AI leaders are investing heavily in survival infrastructure. Some of the most influential people in tech are quietly preparing for worst-case scenarios. They are heavily involved in AI, automation, and platforms that billions rely on daily. Their decisions to invest in bunkers or remote shelters raise a practical question: why now, and what are they preparing for? Let’s break down what’s really going on—who’s building what, where, and why—and then explore the bigger picture […]

Text graphic stating "76% of AI Models Fail This Safety Benchmark!" with logos of ChatGPT, Gemini, Claude, and Grok over a blurred background of the Aymara LLM Risk & Responsibility Matrix heatmap in green, yellow, and red tones.

76% of Top AI Models Fail Basic Safety Tests — How Safe Is Yours?

New research reveals that even the most popular AI models can’t be trusted when it comes to basic safety. The most powerful LLMs today — including models from OpenAI, Google, and Cohere — were put through a rigorous safety benchmark. The results? Not great. Out of 20 models tested across 10 real-world risk areas, none passed all the tests, and 76% failed one of the most basic challenges: impersonation and privacy violations. If you’re building with AI or using it in your product, this should make you pause. Because once something goes wrong, you’re the one holding the bag — not the model provider.

The Aymara Matrix: Safety Benchmark for LLMs

The research comes […]