mRNA Vaccines

Many of the COVID-19 vaccines currently being fast-tracked are not conventional vaccines. Their design is aimed at manipulating your very biology, and they therefore have the potential to alter the biology of the entire human race. The science behind conventional vaccines is to train your body to recognize and respond to the proteins of a particular virus by injecting a small amount of the actual viral protein into your body, thereby triggering an immune response and the development of antibodies.

This is not what happens with an mRNA vaccine. The theory behind these vaccines is that when you inject the mRNA into your cells, it will stimulate your cells to manufacture their own viral protein. The mRNA COVID-19 vaccine will be the first of its kind. No mRNA vaccine has ever been licensed before. And, to add insult to injury, they’re forgoing all animal safety testing.

Dr. Carrie Madej reviews the backgrounds of certain individuals participating in the race for a COVID-19 vaccine, including Moderna co-founder Derrick Rossi, a Harvard researcher who successfully reprogrammed stem cells using modified RNA, thus changing the function of those cells. Moderna was founded on this concept of being able to modify human biological function through genetic engineering, Madej says.

As mentioned, the mRNA vaccines are designed to instruct your cells to make the SARS-CoV-2 spike protein, the glycoprotein that attaches to the ACE2 receptor of the cell. This is the first stage of the two-stage process viruses use to gain entry into cells.

The idea is that by creating the SARS-CoV-2 spike protein, your immune system will mount a response to it and begin producing antibodies to the virus. However, as reported by The Vaccine Reaction, researchers have pointed out potential weaknesses:

According to researchers at University of Pennsylvania and Duke University, mRNA vaccines have potential safety issues, including local and systemic inflammation and stimulation of auto-reactive antibodies and autoimmunity, as well as development of edema (swelling) and blood clots.

Systemic inflammation, auto-reactive antibodies and autoimmune problems are not insignificant concerns. In fact, these are in large part why previous attempts to create a coronavirus vaccine have ALL failed.

Over the past 20 years, coronavirus vaccine research has been plagued by one consistent adverse outcome in particular, namely paradoxical immune enhancement. This stems from the fact that coronavirus infection triggers the production of two different types of antibodies—neutralizing antibodies that fight the infection, and binding antibodies (also known as nonneutralizing antibodies) that cannot prevent viral infection.

Incapable of preventing viral infection, binding antibodies can instead trigger paradoxical immune enhancement. What that means is that it looks good until you get the disease, and then it makes the disease far worse than it would have been otherwise. As detailed in my interview with Robert F. Kennedy Jr., in one coronavirus vaccine trial using ferrets, all the vaccinated animals died when exposed to the actual virus.

According to Madej, animal studies have also found the type of mRNA technology introduced with this vaccine can increase the risk of cancer and mutagenesis (gene mutations).

What You Need to Know About the Delivery System

Madej goes on to discuss how this mRNA vaccine is going to be administered. Rather than a conventional injection, the vaccine will be administered using a microneedle platform. Not only can it be mass produced quickly, but it can also be administered by anyone. It’s as simple as attaching an adhesive bandage to your arm.

The adhesive side of the bandage has rows of tiny microneedles and a hydrogel base that contains luciferase enzyme and the vaccine itself. Because of their tiny size, the microneedles are said to be nearly painless when pressed into the skin.

The idea is that the microneedles will puncture the skin, delivering the modified synthetic RNA into the nucleus of your cells. RNA is essentially coding material that your body uses. In this case, as mentioned, the instructions are to produce the SARS-CoV-2 viral protein.
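
To make concrete what “coding material” means here: a ribosome reads mRNA three bases (one codon) at a time, and each codon maps to one amino acid of the resulting protein. The short Python sketch below translates a made-up mRNA fragment in exactly that fashion; the sequence and the tiny codon table are illustrative stand-ins, not the actual spike-protein code.

    # Toy model of translation: read mRNA one codon (three bases) at a time,
    # mapping each codon to an amino acid until a stop codon appears.
    # The fragment below is invented; it is NOT the real spike sequence.
    CODON_TABLE = {
        "AUG": "Met",  # start codon
        "UUU": "Phe", "GGC": "Gly", "UCU": "Ser",
        "AAA": "Lys", "UAA": "STOP",
    }

    def translate(mrna):
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    print(translate("AUGUUUGGCUCUAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Ser', 'Lys']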

The problem with all of this, Madej notes, is that they’re using a process called transfection — a process used to create genetically modified organisms. She points out that research has confirmed GMO foods are not as healthy as conventional unmodified foods. The question is, might we also become less healthy? “Vaccine manufacturers have stated that this will not alter our DNA, our genome,” Madej says.

I say that is not true. Because if we use this process to make a genetically modified organism, why would it not do the same thing to a human? I don’t know why they’re saying that.

If you look at the definition of transfection, it will tell you that it can be a temporary change in the cell. And I think that is what the vaccine manufacturers are banking on.

Or, it’s a possibility for it to become stable, to be taken up into the genome, and to be so stable that it will start replicating when the genome replicates. Meaning it is now a permanent part of your genome. That’s a chance that we’re taking. It could be temporary, or it could be permanent.

Patentable DNA, Luciferase and Nanotechnology

Naturally, we won’t find out the truth about whether the vaccine causes a temporary or permanent change for many years after the experimental vaccine is introduced, and that’s an important piece of information.

Why? Because synthetic genes can be patented. So, if inserting a synthetic RNA ends up creating permanent changes in the genome, humans will contain patentable genes. What will that mean for us, seeing how patents have owners, and owners have patent rights?

Another part of the delivery system that raises its own set of questions is the use of the enzyme luciferase, which has bioluminescent qualities. The vaccination mark is invisible under normal conditions, but it will glow when viewed with a cellphone app or special device.

As described in the journal RSC Advances in 2015, luciferase gene-loaded quantum dots “can efficiently deliver genes into cells.” The abstract discusses their use as “self-illuminating probes for hepatoma imaging,” but the fact that quantum dots can deliver genetic material is interesting in itself.

The hydrogel, meanwhile, is a DARPA invention that involves nanotechnology and nanobots. This “bioelectronic interface” is part of how the vaccination mark will be able to connect to your smartphone, Madej says, providing information about blood sugar, heart rate and any number of other biological data.

“It has the potential to see almost anything that goes on in your body,” Madej says. This will have immediate ramifications for our privacy, yet no one has addressed where this information will be going. Who will collect and have access to all this data? Who will be responsible for protecting it? How will it be used?

Also, if your cellphone can receive information from your body, what information can your body receive from it, or other sources? Could transmissions affect our mood? Our behavior? Our physical function? Our thoughts or memories?

Stepping Into Transhumanist Territory

In his Forbes article, Sahota quotes Kurzweil’s book “The Singularity Is Near: When Humans Transcend Biology,” in which Kurzweil states:

The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots.

If Madej turns out to be correct, and the mRNA vaccine not only ushers in the ability to alter our genes but also opens the door to nanotechnology-driven interfacing between our bodies and programmable technology, aren’t we in fact stepping over the line into transhuman territory?

The Truthstream Media video above discusses the larger issues of transhumanism and the race to merge man with machine and artificial intelligence. There are even ongoing attempts to upload the human mind into the cloud, ultimately creating a form of “digital hive mind” where everyone communicates via “Wi-Fi telepathy.” This, despite the fact we still do not fully understand what “the mind” actually is, or where it’s located.

Neuralink—A Psychiatric Disaster in the Making?

Another transhumanist who has recently brought us to a brand-new precipice is Elon Musk, with his latest venture, Neuralink, described in the video presentation given in late August, above. Neuralink is a transcranial implant that uses direct current stimulation. For now, the device is aimed at helping people with brain or spinal injuries.

Ultimately, the goal is to merge the human brain with computers. I have strong reservations about this. There’s tremendous room for unintended psychological and psychiatric consequences. In an interview that I did with psychiatrist Dr. Peter Breggin that has not yet been published, he discussed his concerns with this technology, saying:

What’s interesting to me is that while Musk is so brilliant, he’s stupid about the brain. That’s probably because the neurosurgeons and psychiatrists he consults are stupid about the brain. I mean they’re just stupid.

He wants to put in multiple threadlike electrodes into the brain, into webs of neurons, and put in low voltage stimulation. This is insane. The brain can’t tolerate this. He hopes to [be able to] communicate but there’s not going to be any communication.

The brain isn’t going to talk to these electrodes. That’s not how the brain works. The brain talks to itself. It’s not going to talk to Elon Musk [or anyone else] and he’s going to disrupt the brain talking to itself. It’s a terrible thing to do.

I wish somebody who knows Elon Musk would say, ‘You ought to talk to Peter Breggin. He says your consultants are stupid.’ He’s already planning to try to get FDA approval for some neurological disorders and that’ll be the beginning of the onslaught.

Is Transhumanism Inevitable?

Getting back to the mRNA vaccines, time will tell just how hazardous they end up being. Clearly, if the changes end up being permanent, the chance of long-term side effects is much greater than if they end up being temporary.

In a worst-case scenario, whatever changes occur could even be generational. The problem is these issues won’t be readily apparent any time soon. In my view, this vaccine could easily turn into a global catastrophe the likes of which we’ve never experienced before.

We really should not be quick to dismiss the idea that these vaccines may cause permanent genetic changes, because we now have proof that even conventional vaccines have the ability to do that, and they don’t involve the insertion of synthetic RNA.

Fast-Tracked Swine Flu Vaccine Caused Genetic Alterations

After the H1N1 swine flu of 2009, the AS03-adjuvanted swine flu vaccine Pandemrix (a fast-tracked vaccine used in Europe but not in the U.S. during 2009-2010) was causally linked to childhood narcolepsy, which abruptly skyrocketed in several countries.

Children and teens in Finland, the U.K. and Sweden were among the hardest hit. Further analyses discerned a rise in narcolepsy among adults who received the vaccine as well, although the link wasn’t as obvious as that in children and adolescents.

A 2019 study reported finding a “novel association between Pandemrix-associated narcolepsy and the non-coding RNA gene GDNF-AS1”—a gene thought to regulate the production of glial cell line-derived neurotrophic factor or GDNF, a protein that plays an important role in neuronal survival.

They also confirmed a strong association between vaccine-induced narcolepsy and a certain haplotype, suggesting “variation in genes related to immunity and neuronal survival may interact to increase the susceptibility to Pandemrix-induced narcolepsy in certain individuals.”

In addition to that, there’s the research showing that the H1N1 swine flu vaccine was one of five inactivated vaccines that increased overall mortality, especially among girls. A swine flu article I wrote 11 years ago, in 2009, turned out to have a rather prophetic warning at the end:

The swine flu vaccine has not been tested for safety or efficacy, but we DO know it will contain harmful additives. The choice, to me, is obvious. And in the future, anytime a new ‘pandemic’ appears and officials urge you to rush out and get a shot, please remember this article and ask yourself if it’s really you who stands to benefit from their advice.

The Swine Flu Fraud of 1976

We can also learn from the swine flu fiasco of 1976, detailed in this 1979 60 Minutes episode. Fearing a repeat of the 1918 Spanish flu pandemic, “the government propaganda machine cranked into action,” 60 Minutes says, telling all Americans to get vaccinated.

According to 60 Minutes, 46 million Americans were vaccinated against the swine flu at that time. Over the next few years, thousands of Americans filed vaccine damage claims with the federal government. As reported by Smithsonian Magazine in 2017:

In the spring of 1976, it looked like that year’s flu was the real thing. Spoiler alert: it wasn’t, and the rushed response led to a medical debacle that hasn’t gone away.

“Some of the American public’s hesitance to embrace vaccines—the flu vaccine in particular—can be attributed to the long-lasting effects of a failed 1976 campaign to mass-vaccinate the public against a strain of the swine flu virus,” writes Rebecca Kreston for Discover.

“This government-led campaign was widely viewed as a debacle and put an irreparable dent in future public health initiatives, as well as negatively influenced the public’s perception of both the flu and the flu shot in this country.”

A 1981 report by the U.S. General Accounting Office to Sen. John Durkin, D-N.H., reads, in part:

Before the swine flu program there were comparatively few vaccine-related claims made against the Government. Since 1963, Public Health Service records showed that only 27 non-swine flu claims were filed.

However, as of December 31, 1979, we found that 3,839 claims and 988 lawsuits had been filed against the Government alleging injury, death, or other damage resulting from the 45 million swine flu immunizations given under the program.

A Justice official told us that as of October 2, 1980, 3,965 claims and 1,384 lawsuits had been filed. Of the 3,965 claims filed, the Justice official said 316 claims had been settled for about $12.3 million …

The devastating side effects of the Pandemrix vaccine should be instructive. No one anticipated that a flu vaccine would have genetic consequences, yet it did. Now they’re proposing injecting mRNA to make every single cell in your body produce the SARS-CoV-2 spike protein.

It seems outright foolish not to assume there will be significant consequences.

Source: https://thevaccinereaction.org/2020/09/will-new-covid-vaccine-make-you-transhuman/ (with references)

NASA War Document

Officially the ‘Future Strategic Issues / Future Warfare 2025,’ this is a July 2001 PowerPoint presentation by Dennis Bushnell, the Chief Scientist at the NASA Langley Research Center in Virginia, that was found on the NASA website. Participants in the presentation, according to the slides, included the US Air Force, DARPA, the CIA, FBI, Southern Command, Atlantic Command, the Australian DoD, and others. The presentation was meant to incite discussion, based in all cases upon existing data, trends, analyses, and technologies (no pixie dust). It is about robots, cyborgs, and humans.

View Full PDF Here…

Slide #9 says that humans have taken over and vastly shortened the evolution of: (1) the planet, citing global warming, pollution, and deforestation; (2) the human species, via genetic manipulation and “mind children” (i.e., microchipped superhumans); and (3) products and life forms, such as cross-species molecular breeding and “directed evolution,” giving the example of Maxygen Inc., a molecular breeding company.

Slide #19 shows the strengths / weaknesses of human mind vs computers:

Slide #20 reveals a U.S. Human Brain Project that was conducted by the NIH, NSF, NASA, the DOD and DOE; however, information on this research program is not readily available on the internet. There is a UK Human Brain Project, but it is not the same.

Slide #21 features a discussion on artificial intelligence to generate new ideas and concepts including mention of the 1994 ‘Creativity Machine’:

In 1994, Imagination Engines Inc. announced the accomplishment through a patent called the “Creativity Machine,” a computational paradigm, already 20 years in the making, that came the closest yet to emulating the fundamental neurobiological mechanisms responsible for idea formation. Appropriately, the resulting patent’s abstract (US 5,659,666) reads as follows:

A device for simulating human creativity employing a neural network trained to produce input-output maps within some predetermined knowledge domain, an apparatus for subjecting the neural network to perturbations that produce changes in the predetermined knowledge domain, the neural network having an optional output for feeding the outputs of the neural network to a second neural network that evaluates and selects outputs based on training within the second neural network. The device may also include a reciprocal feedback connection from the output of the second neural network to the first neural network to further influence and change what takes place in the aforesaid neural network.
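
In plainer terms, the abstract describes two coupled networks: a generator whose internal connections are deliberately perturbed so that it emits novel output patterns, and a second network that evaluates those outputs and selects the promising ones. A minimal numpy sketch of that generator-plus-critic arrangement follows; the layer sizes, noise level, and scoring rule are illustrative assumptions, not details taken from the patent.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Generator" network: a fixed random single-layer map whose weights are
    # perturbed on every call to produce novel candidates (the patent's
    # "perturbations" of the predetermined knowledge domain).
    W = rng.normal(size=(8, 8))

    def generate_candidate():
        noise = rng.normal(scale=0.3, size=W.shape)  # perturb the weights
        x = rng.normal(size=8)                       # arbitrary probe input
        return np.tanh((W + noise) @ x)

    # "Critic" network: a second fixed map that scores each candidate; the
    # mean activation used here is purely an illustrative choice.
    V = rng.normal(size=(8, 8))

    def score(candidate):
        return float(np.tanh(V @ candidate).mean())

    # Generate many perturbed candidates; keep the one the critic rates best.
    best = max((generate_candidate() for _ in range(100)), key=score)
    print("best candidate:", np.round(best, 2))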

Slide #35:

Slide #44 – Micro Dust Weaponry is now referred to as Smart Dust. Smart dust devices are small wireless microelectromechanical sensors (MEMS) that can detect everything from light to vibrations. Each is a tiny, dust-size device with extraordinary capabilities, built around nano-structured silicon sensors that can spontaneously assemble, orient, sense, and report on their local environment. This new technology combines sensing, computing, wireless communication capabilities, and an autonomous power supply within a volume of only a few millimeters. It is very hard to detect the presence of Smart Dust, and even harder to get rid of it once deployed. Smart Dust is useful in monitoring real-world phenomena without disturbing the original process.
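
Concretely, a smart dust mote’s firmware amounts to a tiny sense-process-report-sleep loop. The Python sketch below mimics that duty cycle; the sensor read, the threshold, and the radio call are hypothetical placeholders, not any real mote’s API.

    import random
    import time

    LIGHT_THRESHOLD = 0.7  # hypothetical trigger level

    def read_light_sensor():
        return random.random()  # stand-in for sampling a real MEMS sensor

    def radio_send(payload):
        print("transmitting:", payload)  # stand-in for a low-power radio burst

    # The classic mote duty cycle: sense, decide locally (computing), report
    # over wireless only when something notable happens, then sleep to
    # stretch the tiny autonomous power supply.
    for _ in range(5):
        level = read_light_sensor()
        if level > LIGHT_THRESHOLD:
            radio_send({"sensor": "light", "value": round(level, 2)})
        time.sleep(0.1)  # a real mote would deep-sleep far longer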

Slide #45:

Slide #50:

Targeted Individuals are real!

Slide #55:

Slide #66:

Slide #67:

DARPA Innovative Defense Technologies:

Slide #93 displays “Exploit CNN Effect,” which likely refers to the ability of CNN, a 24/7 news network, to disperse worldwide propaganda faster than any other method. “Capture/torture Americans in living color on prime time” is a concerning bullet point to say the least, as are the other two bullet points regarding domestic (CONUS = continental U.S.) terror attacks via biological warfare (COVID-19, SARS, etc.), EMP, and radiofrequency (such as 5G).

BELOW: Deborah Tavares is an expert on the NASA document and has been warning about its contents for several years:

IARPA

The Intelligence Advanced Research Projects Activity (IARPA) is a division of the Office of the Director of National Intelligence (ODNI) that, according to their website, “invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community.” They collaborate across the intelligence community to ensure that research addresses relevant future needs. This cross-community focus ensures their ability to: (1) address cross-agency challenges, (2) leverage both operational and R&D expertise from across the IC, and (3) coordinate transition strategies with agency partners. IARPA does not have an operational mission and does not deploy technologies directly to the field. Instead, IARPA facilitates the transition of research results to IC customers for operational application.

IARPA is led by a distinguished group of accomplished scientists and researchers. It is modeled after the Defense Advanced Research Projects Agency (DARPA), and was established in 2006 with the mandate to:

  • conduct cross-community research
  • target new opportunities and innovations
  • generate revolutionary capabilities

IARPA was tasked to accomplish these objectives by drawing upon the technical and operational expertise that resides within the intelligence agencies. This ensured that IARPA’s programs would be uniquely designed to anticipate the long-term needs of, and provide research and technical capabilities for, the Intelligence Community.

Internet

An electronic communications network that connects computer networks and organizational computer facilities around the world. Libertarians often cite the internet as a case in point that liberty is the mother of innovation. Opponents quickly counter that the internet was a government program, proving once again that markets must be guided by the steady hand of the state. In one sense the critics are correct, though not in ways they understand. The internet indeed began as a typical government program, the ARPANET, designed to share mainframe computing power and to establish a secure military communications network. The Advanced Research Projects Agency (ARPA), now DARPA, of the United States Department of Defense funded the original network.

Of course the designers could not have foreseen what the (commercial) internet has become. Still, this reality has important implications for how the internet works — and explains why there are so many roadblocks in the continued development of online technologies. It is only thanks to market participants that the internet became something other than a typical government program: inefficient, overcapitalized, and not directed toward socially useful purposes.

In fact, the role of the government in the creation of the internet is often understated. The internet owes its very existence to the state and to state funding. The story begins with ARPA, created in 1958 in response to the Soviets’ launch of Sputnik and established to research the efficient use of computers for civilian and military applications.

As the term suggests, using computers would no longer be restricted to a static, one-way process but would become dynamically interactive. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research and became a vocal proponent of interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind. Licklider, known by friends, colleagues, and casual acquaintances as “Lick,” was the first to describe the concept he called the “Galactic Network.”

It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, today known as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head the Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.

In the paper “Man-Computer Symbiosis,” published in 1960, Licklider provided a guide for decades of computer research to follow. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site.

In October 1962, Licklider was appointed head of the Information Processing Techniques Office (IPTO) at ARPA, the United States Department of Defense Advanced Research Projects Agency. The IPTO funded the research that led to the development of the ARPANET.

During his time as director of ARPA’s Information Processing Techniques Office (IPTO), Licklider funded a research project headed by Robert Fano at MIT called Project MAC, built around a large mainframe computer designed to be shared by up to 30 simultaneous users, each sitting at a separate typewriter terminal. Project MAC (the Project on Mathematics and Computation) would develop groundbreaking research in operating systems, artificial intelligence, and the theory of computation.

Licklider sought out the leading computer research institutions in the U.S. and set up research contracts with them. Soon there were about a dozen universities and companies working on ARPA contracts including Stanford, UCLA, and Berkeley. Lick jokingly nicknamed his group the Intergalactic Computer Network. This group would later form the core who created the ARPANET.

Lick left ARPA in 1964 without ever implementing his vision there, but he left that vision of a universal network behind in others. Within a few years of his departure, Licklider’s ideas were implemented with the creation of the ARPANET.

Its members realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.

Larry Roberts, the principal architect of the ARPANET, would give credit to Licklider’s vision: “The vision was really Lick’s originally. … He sat down with me and really convinced me that it was important and convinced me into making it happen.”

Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates that the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today.”

Thus, cases of Google building military drones and silencing dissenting voices, Facebook watching us all, and Amazon making facial recognition software for the police need to be understood not as aberrations. Rather, they are business as usual.

Levine believes that he has unearthed several new pieces of evidence that undercut parts of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.

Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking which treated humans as information processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more, and more, information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and, “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58) – and this split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial, or scientific side.

During the 1960s, the RAND Corporation had begun to think about how to design a military communications network that would be invulnerable to a nuclear attack. Paul Baran, a RAND researcher whose work was financed by the Air Force, produced a classified report in 1964 proposing a radical solution to this communication problem. Baran envisioned a decentralized network of different types of “host” computers, without any central switchboard, designed to operate even if parts of it were destroyed. The network would consist of several “nodes,” each equal in authority, each capable of sending and receiving pieces of data.

Each data fragment could thus travel one of several routes to its destination, such that no one part of the network would be completely dependent on the existence of another part. An experimental network of this type, funded by ARPA and thus known as ARPANET, was established at four universities (using 4 computers) in 1969.
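
The resilience Baran was after is easy to demonstrate in miniature: in a mesh of equal nodes, a packet can be rerouted around any node that is knocked out, as long as some path survives. The Python sketch below uses an invented four-node topology (not the actual 1969 ARPANET map) with a breadth-first search standing in for the routing logic.

    from collections import deque

    # An illustrative mesh of equal nodes with redundant links. No node is a
    # central switchboard; each can send and receive.
    LINKS = {
        "A": {"B", "C"},
        "B": {"A", "C", "D"},
        "C": {"A", "B", "D"},
        "D": {"B", "C"},
    }

    def find_route(src, dst, dead=frozenset()):
        """Breadth-first search for any surviving path, skipping dead nodes."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in LINKS[path[-1]] - seen:
                if nxt not in dead:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_route("A", "D"))              # e.g. ['A', 'B', 'D']
    print(find_route("A", "D", dead={"B"}))  # reroutes around B: ['A', 'C', 'D']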

From Wikipedia:

The first successful message on the ARPANET was sent by UCLA student programmer Charley Kline, at 10:30 pm on 29 October 1969, from Boelter Hall 3420. Kline transmitted from the university’s SDS Sigma 7 Host computer to the Stanford Research Institute’s SDS 940 Host computer. The message text was the word login; on an earlier attempt the l and the o letters were transmitted, but the system then crashed. Hence, the literal first message over the ARPANET was lo. About an hour later, after the programmers repaired the code that caused the crash, the SDS Sigma 7 computer effected a full login. The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the entire four-node network was established.

Levine focuses on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community. He claims that these technologies were developed from the beginning with surveillance in mind, and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”

VIDEO: Investigative journalist Yasha Levine shares observations to help us gain perspective on this system we take for granted, revealing the for-profit surveillance businesses operated within Silicon Valley and the military origins of the platforms and tools we use every day. Levine offers findings from his book Surveillance Valley: The Secret Military History of the Internet, tracing the history of this modern commodity back to its beginnings as a Vietnam-era military computer networking project for spying on guerrilla fighters and anti-war protesters. His insight offers us an opportunity to reframe this multinational communication tool as a global system of surveillance and prediction. Levine explores how the same military objectives that drove the development of early internet technology are still at the heart of Silicon Valley today—and invites us to reconsider what we know about the most powerful, ubiquitous tool ever created.

Researchers at any one of the four nodes could share information, and could operate any one of the other machines remotely, over the new network. (Actually, former ARPA head Charles Herzfeld says that distributing computing power over a network, rather than creating a secure military command-and-control system, was the ARPANET’s original goal, though this is a minority view.) Al Gore was not present!

By 1972, the number of host computers connected to the ARPANET had increased to 37. Because it was so easy to send and retrieve data, within a few years the ARPANET became less a network for shared computing than a high-speed, federally subsidized, electronic post office. The main traffic on the ARPANET was not long-distance computing, but news and personal messages.

In 1972, BBN’s Ray Tomlinson introduces network email as the Internetworking Working Group (INWG) forms to address the need for establishing standard protocols.

But Arpanet had a problem: it wasn’t mobile. The computers on Arpanet were gigantic by today’s standards, and they communicated over fixed links. That might work for researchers, who could sit at a terminal in Cambridge or Menlo Park – but it did little for soldiers deployed deep in enemy territory. For Arpanet to be useful to forces in the field, it had to be accessible anywhere in the world.

Picture a jeep in the jungles of Zaire, or a B-52 miles above North Vietnam. Then imagine these as nodes in a wireless network linked to another network of powerful computers thousands of miles away. This is the dream of a networked military using computing power to defeat the Soviet Union and its allies. This is the dream that produced the internet.

Making this dream a reality required doing two things. The first was building a wireless network that could relay packets of data among the widely dispersed cogs of the US military machine by radio or satellite. The second was connecting those wireless networks to the wired network of Arpanet, so that multimillion-dollar mainframes could serve soldiers in combat. “Internetworking,” the scientists called it.

Internetworking is the problem the internet was invented to solve. It presented enormous challenges. Getting computers to talk to one another – networking – had been hard enough. But getting networks to talk to one another – internetworking – posed a whole new set of difficulties, because the networks spoke alien and incompatible dialects. Trying to move data from one to another was like writing a letter in Mandarin to someone who only knows Hungarian and hoping to be understood. It didn’t work.

In response, the architects of the internet developed a kind of digital Esperanto: a common language that enabled data to travel across any network. In 1974, two Arpa researchers named Robert Kahn and Vint Cerf (the duo said by many to be the Fathers of the Internet) published an early blueprint. Drawing on conversations happening throughout the international networking community, they sketched a design for “a simple but very flexible protocol”: a universal set of rules for how computers should communicate.

These rules had to strike a very delicate balance. On the one hand, they needed to be strict enough to ensure the reliable transmission of data. On the other, they needed to be loose enough to accommodate all of the different ways that data might be transmitted.

“It had to be future-proof,” Cerf tells me. You couldn’t write the protocol for one point in time, because it would soon become obsolete. The military would keep innovating. They would keep building new networks and new technologies. The protocol had to keep pace: it had to work across “an arbitrarily large number of distinct and potentially non-interoperable packet switched networks,” Cerf says – including ones that hadn’t been invented yet. This feature would make the system not only future-proof, but potentially infinite. If the rules were robust enough, the “ensemble of networks” could grow indefinitely, assimilating any and all digital forms into its sprawling multithreaded mesh.
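
Those rules matured into the TCP/IP protocol suite discussed later in this article, and the “uniform interface over dissimilar networks” idea is still observable from any modern machine. The Python sketch below opens an ordinary TCP connection; whatever mix of Wi-Fi, fiber, and copper lies along the route, the application sees a single reliable byte stream. (example.com serves purely as a reachable placeholder host.)

    import socket

    # One uniform interface over arbitrarily many dissimilar networks: the
    # TCP/IP stack hides every hop, and the application sees a byte stream.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        reply = conn.recv(1024)

    print(reply.decode(errors="replace").splitlines()[0])  # e.g. HTTP/1.1 200 OK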

Eventually, these rules became the lingua franca of the internet. But first, they needed to be implemented and tweaked and tested – over and over and over again. There was nothing inevitable about the internet getting built. It seemed like a ludicrous idea to many, even among those who were building it. The scale, the ambition – the internet was a skyscraper and nobody had ever seen anything more than a few stories tall. Even with a firehose of cold war military cash behind it, the internet looked like a long shot.

In 1973, global networking becomes a reality as the University College of London (England) and NORSAR (Norway) connect to ARPANET. The term Internet is born. A year later, the first Internet Service Provider (ISP) is born with the introduction of a commercial version of ARPANET, known as Telenet.

Then, in the summer of 1976, it started working.

If you had walked into Rossotti’s beer garden on 27 August 1976, you would have seen the following: seven men and one woman at a table, hovering around a computer terminal, the woman typing. A pair of cables ran from the terminal to the parking lot, disappearing into a big grey van.

Inside the van were machines that transformed the words being typed on the terminal into packets of data. An antenna on the van’s roof then transmitted these packets as radio signals. These signals radiated through the air to a repeater on a nearby mountain top, where they were amplified and rebroadcast. With this extra boost, they could make it all the way to Menlo Park, where an antenna at an office building received them.

It was here that the real magic began. Inside the office building, the incoming packets passed seamlessly from one network to another: from the packet radio network to Arpanet. To make this jump, the packets had to undergo a subtle metamorphosis. They had to change their form without changing their content. Think about water: it can be vapor, liquid or ice, but its chemical composition remains the same. This miraculous flexibility is a feature of the natural universe – which is lucky, because life depends on it.

The flexibility that the internet depends on, by contrast, had to be engineered. And on that day in August, it enabled packets that had only existed as radio signals in a wireless network to become electrical signals in the wired network of Arpanet. Remarkably, this transformation preserved the data perfectly. The packets remained completely intact.

So intact, in fact, that they could travel another 3,000 miles to a computer in Boston and be reassembled into exactly the same message that was typed into the terminal at Rossotti’s. Powering this internetwork odyssey was the new protocol cooked up by Kahn and Cerf. Two networks had become one. The internet worked.
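
In protocol terms, that “metamorphosis” is re-encapsulation: a gateway strips one network’s framing off a packet and wraps the untouched payload in the next network’s framing. A toy Python sketch, with both frame formats invented purely for illustration:

    # Toy re-encapsulation: the payload crosses from a "radio" network to a
    # "wired" network by swapping frames; the data itself never changes.
    payload = b"packets typed at Rossotti's beer garden"

    radio_frame = b"RADIO|" + payload + b"|END"           # packet-radio framing

    received = radio_frame[len(b"RADIO|"):-len(b"|END")]  # gateway strips frame
    wired_frame = b"ARPANET|" + received + b"|END"        # rewraps for Arpanet

    delivered = wired_frame[len(b"ARPANET|"):-len(b"|END")]
    assert delivered == payload                           # content preserved
    print(delivered.decode())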

“There weren’t balloons or anything like that,” Don Nielson tells me. Now in his 80s, Nielson led the experiment at Rossotti’s on behalf of the Stanford Research Institute (SRI), a major Arpa contractor. Tall and soft-spoken, he is relentlessly modest; seldom has someone had a better excuse for bragging and less of a desire to indulge in it. We are sitting in the living room of his Palo Alto home, four miles from Google, nine from Facebook, and at no point does he even partly take credit for creating the technology that made these extravagantly profitable corporations possible.

1976: Queen Elizabeth II hits the “send button” on her first email.

The internet was a group effort, Nielson insists. SRI was only one of many organizations working on it. Perhaps that’s why they didn’t feel comfortable popping bottles of champagne at Rossotti’s – claiming too much glory for one team would have violated the collaborative spirit of the international networking community. Or maybe they just didn’t have the time. Dave Retz, one of the researchers at Rossotti’s, says they were too worried about getting the experiment to work – and then when it did, too worried about whatever came next. There was always more to accomplish: as soon as they’d stitched two networks together, they started working on three – which they achieved a little over a year later, in November 1977.

Over time, the memory of Rossotti’s receded. Nielson himself had forgotten about it until a reporter reminded him 20 years later. “I was sitting in my office one day,” he recalls, when the phone rang. The reporter on the other end had heard about the experiment at Rossotti’s, and wanted to know what it had to do with the birth of the internet. By 1996, Americans were having cybersex in AOL chatrooms and building hideous, seizure-inducing homepages on GeoCities. The internet had outgrown its military roots and gone mainstream, and people were becoming curious about its origins. So Nielson dug out a few old reports from his files, and started reflecting on how the internet began. “This thing is turning out to be a big deal,” he remembers thinking.

What made the internet a big deal is the feature Nielson’s team demonstrated that summer day at Rossotti’s: its flexibility. Forty years ago, the internet teleported thousands of words from the Bay Area to Boston over channels as dissimilar as radio waves and copper telephone lines. Today it bridges far greater distances, over an even wider variety of media. It ferries data among billions of devices, conveying our tweets and Tinder swipes across multiple networks in milliseconds.

The fact that we think of the internet as a world of its own, as a place we can be “in” or “on” – this too is the legacy of Don Nielson and his fellow scientists. By binding different networks together so seamlessly, they made the internet feel like a single space. Strictly speaking, this is an illusion. The internet is composed of many, many networks: when you go to Google’s website, your data must traverse a dozen different routers before it arrives. But the internet is a master weaver: it conceals its stitches extremely well. We’re left with the sensation of a boundless, borderless digital universe – cyberspace, as we used to call it. Forty years ago, this universe first flickered into existence in the foothills outside of Palo Alto, and has been expanding ever since.

As parts of the ARPANET were declassified, commercial networks began to be connected to it. Any type of computer using a particular communications standard, or “protocol,” was capable of sending and receiving information across the network. The design of these protocols was contracted out to private universities such as Stanford and the University of London, and was financed by a variety of federal agencies. The major thoroughfares or “trunk lines” continued to be financed by the Department of Defense.

1983: The Domain Name System (DNS) establishes the familiar .edu, .gov, .com, .mil, .org, .net, and .int system for naming websites. This is easier to remember than the previous designation for websites, such as 123.456.789.10.
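
That name-to-number mapping is still DNS’s whole job, and it can be watched from Python’s standard library in a single call (example.com is a placeholder name; the printed address depends on your resolver):

    import socket

    # DNS resolves a memorable name into the numeric address that routers
    # actually use; before DNS, humans had to remember the numbers.
    print(socket.gethostbyname("example.com"))  # e.g. 93.184.216.34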

By the early 1980s, private use of the ARPA communications protocol — what is now called “TCP/IP” — far exceeded military use. In 1984, the National Science Foundation assumed responsibility for building and maintaining the trunk lines or “backbones.” (The ARPANET formally expired in 1990; by that time hardly anybody noticed.) The NSF’s Office of Advanced Computing financed the internet’s infrastructure from 1984 until 1994, when the backbones were privatized.

1984: William Gibson, author of “Neuromancer,” is the first to use the term “cyberspace.”

In short, both the design and implementation of the internet have relied almost exclusively on government dollars. The fact that its designers envisioned a packet-switching network has serious implications for how the internet actually works. For example, packet switching is a great technology for file transfers, email, and web browsing but not so good for real-time applications like video and audio feeds, and, to a lesser extent, server-based applications like webmail, Google Earth, SAP, PeopleSoft, and Google Spreadsheet.

Furthermore, without any mechanism for pricing individual packets, the network is overused, like any public good. Every packet is assigned an equal priority. A packet containing a surgeon’s diagnosis of an emergency medical procedure has exactly the same chance of getting through as a packet containing part of Coldplay’s latest single or an online gamer’s instruction to smite his foe.

Because the sender’s marginal cost of each transmission is effectively zero, the network is overused, and often congested. Like any essentially unowned resource, an open-ended packet-switching network suffers from what Garrett Hardin famously called the “Tragedy of the Commons.”
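
A toy model makes the point: a neutral packet-switched router behaves like a first-in, first-out queue that never looks at what a packet is worth to its sender, whereas a priced network would let urgent traffic pay to jump ahead. Both queues in the Python sketch below are illustrative, with invented packet labels.

    import heapq
    from collections import deque

    packets = [
        ("surgeon's emergency diagnosis", 1),  # (label, willingness to pay)
        ("Coldplay single, part 7", 0),
        ("online gamer smites his foe", 0),
    ]

    # A neutral packet-switched router: strict first-in, first-out. The
    # emergency packet has exactly the same standing as the song fragment.
    fifo = deque(label for label, _ in packets)
    print([fifo.popleft() for _ in range(len(fifo))])

    # A priced network: higher-paying packets jump the queue.
    priced = [(-pay, i, label) for i, (label, pay) in enumerate(packets)]
    heapq.heapify(priced)
    print([heapq.heappop(priced)[2] for _ in range(len(priced))])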

In no sense can we say that packet-switching is the “right” technology. One of my favorite quotes on this subject comes from the Netbook, a semi-official history of the internet:

“The current global computer network has been developed by scientists and researchers and users who were free of market forces. Because of the government oversight and subsidy of network development, these network pioneers were not under the time pressures or bottom-line restraints that dominate commercial ventures. Therefore, they could contribute the time and labor needed to make sure the problems were solved. And most were doing so to contribute to the networking community.”

In other words, the designers of the internet were “free” from the constraint that whatever they produced had to satisfy consumer wants.

We must be very careful not to describe the internet as a “private” technology, a spontaneous order, or a shining example of capitalistic ingenuity. It is none of these. Of course, almost all of the internet’s current applications — unforeseen by its original designers — have been developed in the private sector. (Unfortunately, the original web and the web browser are not among them, having been designed by the state-funded European Laboratory for Particle Physics (CERN) and the University of Illinois’s NCSA.)

The World Wide Web wasn’t created until 1989, 20 years after the first “Internet” connection was established and the first message sent.

1990: Tim Berners-Lee, a scientist at CERN, the European Organization for Nuclear Research, develops HyperText Markup Language (HTML). This technology continues to have a large impact on how we navigate and view the Internet today.

1991: CERN introduces the World Wide Web to the public.

1992: The first audio and video are distributed over the Internet. The phrase “surfing the Internet” is popularized.

And today’s internet would be impossible without the heroic efforts at Xerox PARC and Apple to develop a useable graphical user interface (GUI), a lightweight and durable mouse, and the Ethernet protocol. Still, none of these would have been viable without the huge investment of public dollars that brought the network into existence in the first place.

Now, it is easy to admire the technology of the internet. I marvel at it every day. But technological value is not the same as economic value. That can only be determined by the free choice of consumers to buy or not to buy. The ARPANET may well have been technologically superior to any commercial networks that existed at the time, just as Betamax may have been technologically superior to VHS, the MacOS to MS-DOS, and Dvorak to QWERTY. (Actually Dvorak wasn’t.) But the products and features valued by engineers are not always the same as those valued by consumers. Markets select for economic superiority, not technological superiority (even in the presence of nefarious “network effects,” as shown convincingly by Liebowitz and Margolis).

Libertarian internet enthusiasts tend to forget the fallacy of the broken window. We see the internet. We see its uses. We see the benefits it brings. We surf the web and check our email and download our music. But we will never see the technologies that weren’t developed because the resources that would have been used to develop them were confiscated by the Defense Department and given to Stanford engineers. Likewise, I may admire the majesty and grandeur of an Egyptian pyramid, a TVA dam, or a Saturn V rocket, but it doesn’t follow that I think they should have been created, let alone at taxpayer expense.

What kind of global computer network would the market have selected? We can only guess. Maybe it would be more like the commercial online networks such as Comcast or MSN, or the private bulletin boards of the 1980s. Most likely, it would use some kind of pricing schedule, where different charges would be assessed for different types of transmissions.

The whole idea of pricing the internet as a scarce resource — and bandwidth is, given current technology, scarce, though we usually don’t notice this — is ignored in most proposals to legislate network neutrality, a form of “network socialism” that can only stymie the internet’s continued growth and development. The net neutrality debate takes place in the shadow of government intervention. So too the debate over the division of the spectrum for wireless transmission. Any resource the government controls will be allocated based on political priorities.

Let us conclude: yes, the government was the founder of the internet. As a result, we are left with a panoply of lingering inefficiencies, misallocations, abuses, and political favoritism. In other words, government involvement accounts for the internet’s continuing problems, while the market should get the credit for its glories.


Defense Advanced Research Projects Agency

An agency of the US Dept. of Defense that was formed in 1958 (as ARPA), the scientific vanguard of the Deep State, and the world’s pioneer of advanced technology, bringing science fiction to life. With an on-the-record annual budget of close to $3 billion, their role in our world seems to be to absorb any and all human innovation and re-direct it toward the art of killing and controlling people. A simple internet search of ‘DARPA’ reveals an endless array of incredibly cool and creepy science projects, and public science competitions designed to discover and recruit the greatest upcoming scientific minds the human race produces. (source)

One of the best ways to describe the U.S. Department of Defense and DARPA is to use the Star Wars moniker “Evil Empire” to describe its international, secretive agenda to take over the world and turn all of us into neo-feudal slaves who have no choice but to give allegiance to the controllers of the universe—Darth Vader and the Sith Lords.

DARPA “Vader” has controlled technological innovation since 1958 when it was created by the military-driven Evil Empire, a/k/a Department of Defense. Every emerging technology from the Evil Empire leads humanity into the science fiction fate of machines controlling humans, much like Darth Vader became a weapon that was half man and half machine. There is much wisdom and truth that lies behind the first Star Wars movie, and its story was given to humanity as a warning of how the Evil Empire and its Death Star planned to destroy the entire planet.

Folks, this is no longer science fiction. This battle for Earth is going on now and we, like the Rebel Alliance, must join forces to destroy the imperial forces of Darth Vader.

Aside: Do you know the real back story of Star Wars? If not, see Star Wars: The Secret Weapon and Why George Lucas has Kept It Hidden.

Folks at the DoD should read science fiction so that they can see how their actions—individually and collectively—are leading humanity into the evil destiny commonly found in sci-fi: robot wars, cyborgs turning against humans, computer take-over of the world, endless weapons, and, you guessed it – the Death Star. DARPA is the father of war-fighting, both conventional and digital. And as we all know, Darth Vader took his orders from the Evil Emperor. DARPA Vader is controlled by a man they called “Yoda,” but is in reality the Evil Emperor who leads a group called the Highlands Forum (Evil Empire Imperial Command) that directs all military research and development.

Who is this Evil Emperor who looks as innocent as Yoda? His name is Andrew Marshall and for decades this single man has driven the Highlands Forum (Evil Empire) into creating weapons of every sort that have been released into the corporate world and now control your laptop, phone, computer, and every other device with a microprocessor inside – especially the “Evil Intel Empire Inside.”

Intel Inside is found in our private digital devices and is, in fact, collecting “Intelligence” for the Department of Defense, CIA, and NSA who are all members of the Evil Empire. Yes, the one they call “Yoda” is actually the Evil Emperor in disguise who controls DARPA Vader and the evil Sith lords of war. He does not have lightning bolts shooting out of his fingers – unless of course he uses one of his many DARPA inventions to do so.

Continue ‘DARPA Vader and the Evil Intel Empire Inside’ by Anonymous Patriots at The Millennial Report

Below: Timothy Alberino joins David Knight to talk about the military origins of new transformative technologies like genetics, artificial intelligence & robots — and how globalists seek to use these technologies to transform humanity into a posthuman future.

Below: James Corbett Special Report exposing DARPA:

Chronological History of Events Related to DARPA

DARPA Launches Project CHARIOT in Bid to Protect Big Tech Profits / Give Backdoor Access to IOT

DARPA announces a new type of cryptography to protect Big Tech firms’ profits from the dawn of quantum computers and allow backdoor access into 3 trillion internet-connected devices. by Raul Diego The U.S. Military-Industrial complex is sprinting on a chariot to shore up the encryption space before the next era of computation upends the entire digital edifice built on semiconductors and transistors. But, the core ...
Read More
Peruvian Doctor claims Recent Advancements in Brain Chip Technology Could be due to “Secret, Forced, and Illicit Human Experimentation” by Big Tech & Rogue Government

A doctor from Peru claims that recent advancements in brain chip technology could be due to “secret, forced, and illicit human experimentation” by a consortium of transnational tech companies and governments operating outside of the law. A 2016 paper published in the Egyptian Journal of Internal Medicine warned that “secret, forced, and illicit human experimentation” could be happening in Latin America, where poverty-stricken masses are ...
Read More
Big Tech & Big Brother meet at Facebook HQ to discuss how to ‘secure’ US elections

Security teams for Facebook, Google, Twitter and Microsoft met with the FBI, the Department of Homeland Security and the Director of National Intelligence’s office to coordinate a strategy to secure the 2020 elections. The tech platforms met with government officials at Facebook’s Menlo Park headquarters on Wednesday, the company has confirmed, boasting that Big Tech and Big Brother have developed a “comprehensive strategy” to get control ...
Read More
Scathing Report Accuses the Pentagon of Developing an Agricultural Bioweapon

A new technology in which insects are used to genetically modify crops could be converted into a dangerous, and possibly illegal, bioweapon, alleges a Science Policy Forum report released today. Naturally, the organization leading the research says it’s doing nothing of the sort. The report is a response to an ongoing research program funded by the U.S. Defense Advanced Research Projects Agency (DARPA). Dubbed “Insect Allies,” ...
Read More
Whistleblower: DARPA, CIA Scientist and Engineer Dr. Robert Duncan Speaks at The Bases Project Conference, Exposes Mind Control Techniques

I will begin by saying, we are living in a time where the dangers of military technology are now beyond our control. We live as guinea pigs in a world where those with the power, knowledge and technology keep us from our true freedom: to live with free will in this beautiful world. They manipulate us, direct us and destroy everything that was truly divine and ...
Read More