'Doom calculator': Scientists develop AI algorithm that can predict death within 4 years
Researchers from Denmark and the U.S. have developed a groundbreaking artificial intelligence algorithm, dubbed a "doom calculator," that predicted with more than 75% accuracy whether a person would die within a four-year period, according to USA Today.
"The whole story of a human life, in a way, can also be thought of as a giant long sentence of the many things that can happen to a person,” notes Sune Lehmann, a professor of networks and complexity science at the Technical University of Denmark.
The project was recently published in the journal Nature Computational Science. The researchers built a transformer-based machine-learning model named life2vec, which relies on the same kind of architecture as ChatGPT but has no conversational interface.
The model processed extensive data, including age, health, education, employment, income, and other life events, drawn from a dataset of more than 6 million people in Denmark supplied by the country's government, which actively collaborated on the research.
The life2vec model was trained to analyze information about people's lives presented as sentences, such as "In September 2012, Francisco received 20,000 Danish kroner as a guard at a castle in Elsinore" or "During her third year at secondary boarding school, Hermione followed five elective classes."
From these sequences, life2vec learned to construct what the paper calls "individual human life trajectories."
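To make the idea concrete, the following is a minimal, illustrative Python (PyTorch) sketch of how life events could in principle be encoded as token sequences and passed through a small transformer that outputs a binary prediction. The vocabulary, event names, model sizes, and pooling choice are invented for demonstration and do not reflect the actual life2vec architecture or the Danish registry data.

```python
# Illustrative sketch only: toy encoding of "life sentences" as token IDs
# fed to a tiny transformer classifier. All names and sizes are assumptions,
# not the life2vec implementation.
import torch
import torch.nn as nn

# Hypothetical vocabulary mapping life-event tokens to integer IDs.
vocab = {"<pad>": 0, "salary_20k_dkk": 1, "job_guard": 2, "city_elsinore": 3,
         "school_boarding": 4, "electives_5": 5}

def encode_events(events, max_len=8):
    """Turn a list of event tokens into a fixed-length, padded ID tensor."""
    ids = [vocab[e] for e in events][:max_len]
    ids += [vocab["<pad>"]] * (max_len - len(ids))
    return torch.tensor(ids)

class ToyLifeModel(nn.Module):
    """Small transformer encoder that reads an event sequence and outputs
    a probability for a binary outcome (e.g. an event within some horizon)."""
    def __init__(self, vocab_size, dim=32, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, dim)
        x = self.encoder(x)              # contextualized event embeddings
        pooled = x.mean(dim=1)           # simple mean pooling over events
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

# One synthetic "life sentence" loosely based on the article's example.
events = ["salary_20k_dkk", "job_guard", "city_elsinore"]
model = ToyLifeModel(vocab_size=len(vocab))
prob = model(encode_events(events).unsqueeze(0))  # untrained, so meaningless
print(f"predicted outcome probability: {prob.item():.3f}")
```

The point of the sketch is only to show the data structure: each life becomes a sequence of discrete event tokens, and a transformer learns to summarize that sequence into a single prediction.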
AI phenomenon
About a year ago, OpenAI introduced ChatGPT, an AI-powered chatbot that can answer a wide range of user queries. The technology proved revolutionary, bringing artificial intelligence into ordinary users' everyday lives.
It also triggered a wave of similar developments, including Google Bard, Google Gemini, and Samsung's Gauss. However, it raised concerns about the workforce, with millions of employees potentially becoming replaceable or redundant.
Moreover, OpenAI researchers have warned that a sufficiently powerful artificial intelligence, in their view, "could pose a threat to humanity," prompting the company to draw up guidelines for assessing potentially catastrophic AI risks.