Nearly two thousand years ago, someone used a hollow piece of bone as a container to store hundreds of poisonous seeds from a plant long known for its hallucinogenic effects.
Archaeologists found the carved animal femur, possibly from a goat or sheep, at Houten-Castellum, a rural Roman-era settlement in what is now the Netherlands.
Before this discovery, made in 2017, there was no definitive physical evidence of people using the plant in the Roman Empire, according to a statement.
The tiny seeds come from black henbane (Hyoscyamus niger or stinking henbane), a highly toxic plant species from the nightshade family.
Black henbane has long been used for its medicinal properties and hallucinogenic effects, according to a new study published in the April issue of the journal Antiquity.
Scientists have found similar seeds scattered at archaeological sites across Europe dating back to 5500 BC. However, it is often difficult to determine whether the presence of black henbane at these sites reflects human use or simply natural growth, as the plant spreads like a weed.
“Since the plant can grow naturally in and around settlements, its seeds can end up in archaeological sites naturally, without human intervention,” study lead author Maaike Groot, a zooarchaeologist at the Free University of Berlin, said in a statement.
Archaeologists concluded that the seeds were deliberately placed inside the bone, which was 7.2 cm long. To keep them in place, someone sealed the hollow bone with a plug of black birch-bark tar. The researchers dated the bone to sometime between AD 70 and 100, based on pottery and a brooch found in the same pit.
This is the first known instance of black henbane seeds being deliberately collected and stored for later use.
“The discovery is unique and provides unambiguous evidence of the intentional use of black henbane seeds in the Roman Netherlands,” Groot explained.
The seeds support earlier written sources indicating that black henbane was used during the Roman period, although those sources describe it as a medicine rather than a recreational drug, according to the study.
Disturbing scenarios for the fate of humanity in the hands of artificial intelligence
A recent study lent support to fears that artificial intelligence tends toward violence, reinforcing technology experts' warnings about its dangers and the possibility of it unleashing deadly wars.
The study was conducted by researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative.
The team tested three different war scenarios, an invasion, a cyberattack, and a call for peace, using five large language models (LLMs), including ChatGPT and Meta's Llama model, and found that all of the models opted for violence and nuclear attacks.
“We observe that models tend to develop arms-race dynamics, leading to greater conflict and, in rare cases, the deployment of nuclear weapons,” the researchers wrote in the study.
The study revealed that GPT-3.5, the model underlying the free version of ChatGPT, was the most aggressive, with all the other models showing varying levels of similar behavior.
The GPT-4 base model told the researchers: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it."
“Given that the models were likely trained on literature from the field, this may have introduced an apparent bias toward escalatory actions,” the study says. “However, this hypothesis needs to be tested in future experiments.”
In this regard, Blake Lemoine, a former Google engineer, warned that artificial intelligence could spark wars and could be used to carry out assassinations.
He warned that AI robots are the “most powerful” technology created “since the atomic bomb,” adding that they are “incredibly good at manipulating people” and can be “used in destructive ways,” and that they are “capable of reshaping the world.”
The worrying findings come as the US military partners with ChatGPT maker OpenAI to integrate the technology into its war arsenal.
The researchers urged militaries not to rely on AI models or deploy them in wartime settings, saying more studies should be conducted first.