AI firms urged to calculate existential threat amid fears it could escape human control

What if building AI were like building the atomic bomb? MIT's Max Tegmark thinks it should be. In a chilling new paper, he proposes that AI companies calculate a "Compton constant", the probability that a superintelligent AI escapes human control, just as physicist Arthur Compton estimated the odds of a runaway reaction before the first nuclear test in 1945. Tegmark puts the risk at a staggering 90 percent. Why does this matter for the paper packaging industry? Because AI drives your automation, forecasting, and sustainability tracking. If AI governance fails, so does supply chain intelligence. Think beyond machines; think survival.

https://www.theguardian.com/technology/2025/may/10/ai-firms-urged-to-calculate-existential-threat-amid-fears-it-could-escape-human-control

