AI firms urged to calculate existential threat amid fears it could escape human control
What if building AI were like building the atomic bomb? MIT's Max Tegmark thinks it should be. In a chilling new paper, he proposes that AI companies calculate a "Compton constant": the odds that a superintelligent AI escapes human control, just as physicists estimated the odds of a runaway reaction before the first nuclear test in 1945. Tegmark puts the risk at a staggering 90 percent. Why does this matter for the paper packaging industry? Because AI drives your automation, forecasting, and sustainability tracking. If AI governance fails, so does supply chain intelligence. Think beyond machines. Think survival.
https://www.theguardian.com/technology/2025/may/10/ai-firms-urged-to-calculate-existential-threat-amid-fears-it-could-escape-human-control