About half a century ago, Cornell astronomer Frank Drake conducted Project Ozma, the first systematic SETI survey, at the National Radio Astronomy Observatory in Green Bank, West Virginia. In the decades since, multiple surveys have searched for “technosignatures”: the term these astronomers use for evidence of advanced life, such as radio communications.
To put it simply, if humanity received a message from an extraterrestrial civilization right now, it would be the greatest event in the history of human civilization. However, according to a new study, it could also pose serious risks.
A message could take many forms, and among the possibilities is that it amounts to little more than spam or a virus. The paper, titled “Interstellar communication. IX. Message decontamination is impossible”, recently appeared online.
The study was conducted by Michael Hippke, an independent scientist at the Sonneberg Observatory in Germany, and John G. Learned, a professor with the High Energy Physics Group at the University of Hawaii.
Together, they examine some common assumptions about SETI and consider what is more likely to be the case.
The notion of an extraterrestrial civilization posing a threat to mankind and the planet is nothing new. It has appeared in science fiction literature and film for decades. Scientists have also considered the possibility, and there may well be government contingency plans for such an event. The point the authors make is that, in this scenario, a hostile message seems to be the more probable outcome. Since the risks outweigh the benefits, we have to be prepared with a plan.
The plan is simple: if an extraterrestrial civilization contacts us, we should not communicate back. We should not engage in any form of communication, and we should do our best to hide our planet.
As Learned told Universe Today via email, there has never been a consensus among SETI researchers about whether or not ETI would be benevolent:
“There is no compelling reason at all to assume benevolence (for example that ETI are wise and kind due to their ancient civilization’s experience).
I find much more compelling the analogy to what we know from our history… Is there any society anywhere which has had a good experience after meeting up with a technologically advanced invader? Of course, it would go either way, but I think often of the movie Alien… a credible notion it seems to me.”
In addition, assuming that an alien message could pose a threat to humanity makes practical sense.
Given the size of the observable universe and the limits imposed by Einstein’s relativity (no faster-than-light travel), it would always be more economical for a civilization to send a message capable of eradicating another civilization than to travel to its location and wage war. As a result, Hippke and Learned advise that SETI signals be vetted and/or “decontaminated” beforehand.
There are a number of ways in which an alien message can pose a threat.
The message could simply convey misinformation. Put more simply, why should we trust a message sent by an alien civilization we know nothing about? The message may be designed to cause panic or self-destructive behaviour. Something similar was illustrated in the movie Arrival, where humans misinterpreted the aliens’ message and believed the aliens wanted humanity to fight amongst itself. Moreover, such a message would not arrive at just one place on Earth, making containment impossible.
Messages from aliens will probably use a complex encoding, making it impossible for any of us to decode them without a computer. Such a message could carry a self-contained AI designed to cause harm. In that case, what we would need is a message prison to contain it. Unfortunately, the authors acknowledge that no prison would be 100 percent effective, and containment could eventually fail.
“This scenario resembles the Oracle-AI, or AI box, of an isolated computer system where a possibly dangerous AI is ‘imprisoned’ with only minimalist communication channels,” they write.
“Current research indicates that even well-designed boxes are useless, and a sufficiently intelligent AI will be able to persuade or trick its human keepers into releasing it.”
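The "message prison" idea can be sketched in code. The following is a minimal, illustrative Python sketch (not from the paper): it runs a hypothetical decoder in a separate process with a hard timeout and a stripped environment. The names `analyze_in_prison` and `DECODER` are assumptions made here for illustration; real containment would demand far more (no network, no filesystem, hard resource limits), and even elaborate boxes can fail.

```python
import subprocess
import sys

# Hypothetical decoder run inside the "prison". Here it only reports the
# payload size; a real decoder would be the untrusted logic being contained.
DECODER = """
import sys
data = sys.stdin.buffer.read()
print(len(data))
"""

def analyze_in_prison(payload: bytes, timeout_s: float = 5.0) -> str:
    """Run the untrusted decoder in a separate process, killed after timeout_s."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", DECODER],  # -I: Python's isolated mode
        input=payload,
        capture_output=True,
        timeout=timeout_s,  # raises TimeoutExpired if the decoder hangs
        env={},             # pass no environment variables to the child
    )
    return result.stdout.decode().strip()

print(analyze_in_prison(b"some alien payload"))  # prints 18
```

Even this toy version hints at the problem the authors describe: the prison only works as long as the output channel back to the keepers stays harmless, and that is exactly the channel a sufficiently clever message would exploit.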
In the end, what appears to be the real solution is simply to apply the rules of internet security. If you receive anything in your inbox that is suspiciously large, has cryptic content, or makes an offer too good to be true, you simply delete it.
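That "inbox hygiene" rule can be made concrete. Here is a short Python sketch, with hypothetical thresholds chosen for illustration only, that flags a message as suspicious if it is unusually large or its content looks cryptic, i.e. close to random:

```python
import math

# Illustrative thresholds -- hypothetical values, not from the study.
MAX_SIZE_BYTES = 1_000_000      # "suspiciously large"
ENTROPY_LIMIT = 7.5             # bits/byte; near 8.0 means random-looking data

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; high values suggest encrypted or cryptic content."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_suspicious(message: bytes) -> bool:
    """Apply the simple inbox rules: too large, or cryptic (high-entropy) content."""
    if len(message) > MAX_SIZE_BYTES:
        return True
    if shannon_entropy(message) > ENTROPY_LIMIT:
        return True
    return False
```

A plain-text greeting passes, while a multi-megabyte blob or near-random ciphertext gets flagged; deciding what to do with flagged messages, of course, is the hard part the study is about.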
Source: Universe Today