Banner image showing red cartoon crabs and green virus icons floating over a blue digital circuit-board background, with the headline “OpenClaw’s Security Crisis: Malicious Skills Stealing Users’ Data” displayed prominently in the center.

OpenClaw Security Crisis: Hundreds of Malicious Skills Found on ClawHub

On February 1, 2026, cybersecurity firm Koi Security published a report called “ClawHavoc.” The firm had reviewed every skill on ClawHub and found 341 designed to steal data from users. The attack was not random: the fake skills targeted people who use OpenClaw for cryptocurrency trading and productivity automation. They had convincing names, plausible descriptions, and some even included partially functional code. At a glance, they looked real. One attacker account, “hightower6eu,” was responsible for 314 of the 341 malicious skills. A few other accounts (zaycv, Aslaep123, and a GitHub user called aztr0nutzs) published the rest. Koi Security actually used an OpenClaw bot called “Alex” to help with the […]

Graduation cap being tossed into the air against a clear blue sky, with the text overlay: "Is AI Taking Your Job Or Helping You Build One?"

Should Gen Z Fear Being Replaced by AI?

If you're graduating from college in the next few years, you're stepping into a job market changing faster than anyone can predict. Some AI leaders say half of entry-level white-collar jobs could be automated within five years, while others argue this generation has access to tools that will help solo founders build billion-dollar companies. The gap between these predictions shows just how uncertain the future of work has become. Salesforce recently cut 4,000 jobs, citing AI capabilities. Other companies are also moving to eliminate roles where artificial intelligence can handle the work. Meanwhile, young entrepreneurs are taking these [...] Read more

A group of AI safety activists protest in front of a modern glass building labeled “OpenAI” in Silicon Valley. Protesters hold cardboard signs that read “ETHICS OVER PROFIT” and “AI NEEDS REGULATION.” One woman speaks passionately into a megaphone. A drone hovers above the crowd, and people appear serious and determined. Bold white and yellow text over the image reads: “10 Days Without Food To Stop AGI?” The scene conveys urgency and resistance against unchecked AI development.

Anti-AI Activists on Hunger Strike at Anthropic and DeepMind

Two anti-AI activists are now more than ten days into hunger strikes outside the headquarters of major AI companies. Guido Reichstadter is protesting outside Anthropic's offices, while Michaël Trazzi is doing the same at DeepMind. Both argue that the current path toward advanced artificial intelligence poses an existential threat to humanity. In their view, AI companies are racing into dangerous territory that could harm families and children around the world. The protests represent a growing, though still small, movement of people who believe current AI development is reckless and could end in catastrophe. Reichstadter, a father of two, previously staged a 28-hour protest on a bridge.

Illustration of a young woman holding a smartphone, with her face split in half — one side smiling and bathed in warm red light with floating heart icons, the other side cracked, pale, and sad, shown in cold blue tones with digital glitches and binary code in the background. The text overlay reads: “Is This the End of Real Dating? Millions Are in Love With AI!”

New Study Finds Romantic AI Chatbots Lead To Poorer Mental Health

One in four young adults is now having regular conversations with AI romantic partners, spending an average of 50 minutes per week chatting with artificial companions. That's according to new research led by Brian Willoughby, a professor at Brigham Young University. The team surveyed nearly 3,000 American adults about their use of romantic AI technologies. The results show that 19% of all adults have interacted with AI chatbots designed as romantic partners, while 13% actively seek out AI-generated social media accounts featuring idealized virtual people. Notably, people in committed relationships are actually more likely to use romantic AI than single people, challenging assumptions about loneliness […]

DeepMind Co-Founder Predicts AGI by 2028 And Says Humanity Is at Risk

Artificial General Intelligence (AGI) is often described as a game-changer for humanity—a technology capable of thinking, learning, and solving problems like humans. It promises breakthroughs in medicine, climate solutions, and beyond. But alongside these possibilities lie dire warnings about the risks it poses, including the potential to threaten humanity’s very existence. Shane Legg, co-founder of DeepMind, is one of the experts raising the alarm. He predicts that AGI could arrive sooner than most people think and warns it carries a significant chance of catastrophic consequences. Just like Elon Musk, another vocal critic of AI, Legg highlights the dangers of unregulated development and the potential for devastating outcomes. In this article, […]

What Is P(Doom), and Could AI Destroy Humanity?

If you've been following the rise of AI, you'll eventually run into the term “P(doom).” At first glance, it sounds like a cryptic in-joke or the title of a dystopian sci-fi film. But the concept is no laughing matter: it is a metric that quantifies the probability of catastrophic outcomes from existential risks tied to artificial intelligence, and to artificial general intelligence (AGI) in particular. P(doom) has gone from casual banter among researchers to the focus of a global debate, sparking heated arguments about humanity's future. So what exactly is P(doom), why should you care, and what are the experts saying? This rabbit hole runs deep, so buckle up. What is P(doom)? Simply put, P(doom) represents a probability.