DNA 101: How It Works and Why It's Astounding

An Important Discovery

One of the most important discoveries in modern biology occurred in the 1950s, during a period that historians call the molecular biological revolution. We all know about Watson and Crick. In 1953, they elucidate the structure of the DNA molecule, the beautiful double helix.

Four years later, Francis Crick cracks the ultimate code. He realizes that along the backbone of the double helix molecule, there are these four chemical subunits called nucleotide bases.

He postulates that the nucleotide bases function like alphabetic characters in a written language, or like the digital characters that we use in computer code today, the 0s and 1s. Which is to say, the nucleotide bases don't perform a biological function by virtue of their shape or of the reactions they take part in. It's not their chemical or geometric properties that give them their function; it's their sequential arrangement, interpreted according to an independent symbol convention, one that was later discovered and is now known as the genetic code.

So, we have genetic text interpreted by genetic code inside the cell. And Crick postulates that the information along the DNA molecule inscribed in essentially digital or alphabetic or typographic form is directing the synthesis of the crucial protein molecules that are needed to keep cells alive.
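Crick's idea of an independent symbol convention can be made concrete. Here is a minimal Python sketch of translation, in which a lookup table maps three-letter DNA "words" (codons) to amino acids. Only a handful of the 64 codons of the standard genetic code are shown; the point is that the mapping is a convention, like a cipher key, not a consequence of the letters' chemistry.

```python
# A small subset of the standard genetic code: DNA codons -> amino acids.
# ("*" marks a stop signal; the full table has 64 entries.)
CODON_TABLE = {
    "ATG": "M",  # methionine (also the usual start signal)
    "AAA": "K",  # lysine
    "TGG": "W",  # tryptophan
    "TTT": "F",  # phenylalanine
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna):
    """Read a DNA string three letters at a time and translate each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = codon not in subset
        if aa == "*":
            break  # a stop codon ends the protein
        protein.append(aa)
    return "".join(protein)

print(translate("ATGAAATGGTAA"))  # -> MKW
```

The same four letters in a different order would spell a different protein, which is exactly the sense in which sequence, not chemistry, carries the information.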


Crick’s Hypothesis Catches On

It takes about eight years for his conjecture, his hypothesis, to be confirmed. It's not something that could be confirmed by one simple experiment, but by 1965, biologists on both sides of the Atlantic, in France, the UK, and the US, have figured out that Crick is pretty much right.

And this is a mind-blowing, stop-the-presses moment in the history of biology, because we essentially have what engineers now call CAD/CAM, computer-aided design and manufacturing. If you go down the street from us here to the Boeing plant, you'll find engineers sitting at consoles writing code that directs the construction of, say, an airplane wing, specifying where the rivets are placed on the wing.

The same kind of information technology is at work in every cell of every living organism. The information in the DNA molecule directs the construction of the proteins and protein machines that keep cells and organisms alive.

Origin of Information

So, the ultimate question in the origin of life is where that information comes from. The question arises both in chemical evolutionary theory, where you're trying to explain the origin of the first cell, and in biological evolutionary theory, where you're trying to account for the origin of new forms of life from simpler preexisting forms.

If you want to give your computer a new function, you've got to give it new code, new software. The same thing turns out to be true in life. If you want to build a new form of life, you have to have new types of cells; new cells require new proteins; new proteins require new genetic information.

This is what Neo-Darwinism really has a hard time explaining, because if you rely on a random method of altering preexisting code (which is what a mutation is), you're overwhelmingly more likely to degrade that information before you ever arrive at a new functional sequence, new information capable of building a new protein.

When I pose this question to computer programmers as a hypothetical: if you take a section of code and start randomly changing the 0s and 1s, are you going to get a new operating system or a new program, or are you going to introduce bugs and glitches before you get there?

They get it right away. Random changes in functional sequences of information are almost invariably destructive; they're not a viable, credible, or plausible mechanism for generating new forms of information. And yet that's the mechanism Neo-Darwinism must resort to in this age of molecular biology. It's one of the reasons the mechanism is seen as inadequate.
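The programmers' intuition is easy to simulate. The toy sketch below (an illustration of my own, not from the original discussion) takes a small "functional" string, a valid Python arithmetic expression, applies a random single-character mutation, and counts how often the result no longer even parses, let alone computes something new.

```python
import random

def mutate(s, rng):
    """Replace one randomly chosen character with a random printable one."""
    i = rng.randrange(len(s))
    return s[:i] + chr(rng.randrange(32, 127)) + s[i + 1:]

def still_parses(expr):
    """Does the mutated text still count as syntactically valid code?"""
    try:
        compile(expr, "<mut>", "eval")
        return True
    except SyntaxError:
        return False

rng = random.Random(0)       # fixed seed for a repeatable demo
expr = "(1 + 2) * (3 + 4)"   # a small piece of "functional" code
trials = 1000
broken = sum(not still_parses(mutate(expr, rng)) for _ in range(trials))
print(f"{broken}/{trials} single-character mutations break the expression")
```

Even this generous test only checks syntax; mutations that still parse usually change or destroy the expression's value, so the fraction that degrades function is higher still.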

What Is Extended Synthesis?

Extended synthesis is the name for a whole range of new proposals made by leading evolutionary biologists who, on the one hand, recognize that Neo-Darwinism is inadequate, that the mutation/natural selection mechanism does not generate new form and new biological information.

On the other hand, these biologists want to hold onto a strictly materialist or naturalistic approach to solving the problem. So, they're proposing other materialistic evolutionary mechanisms to compensate for the perceived (and increasingly agreed-upon) limitations or liabilities of the mutation/selection mechanism.

So, the extended synthesis terminology is a variation on the old idea that Neo-Darwinism was a synthesis of Mendelian genetics and classical Darwinian theory. That synthesis, forged in the 1930s and 40s, was called the new synthesis.

So, the extended synthesis is a set of new evolutionary mechanisms being bolted onto the old Neo-Darwinian theory.

Some of the proposals are really quite interesting, in that they describe biological mechanisms to which Neo-Darwinism has given short shrift; they're not things that Darwinists talk about much.

The Holes in Extended Synthesis

The problem we found with each of these new proposals is that they don't account for the origin of the information necessary to build new forms of life either. Typically, they either don't address the problem of biological information at all, or they end up begging the question: they explain the origin of new form by presupposing preexisting, unexplained sources of information.

As an example, there's a brilliant scientist at the University of Chicago named James Shapiro. He gave one of the best talks at the Royal Society conference in London in November 2016. He has a new model he calls natural genetic engineering.

What he points out is that the mutations we see taking place in nature very often are not random at all; they're not random with respect to the survival needs of the organism. Rather, they are in some way produced by guided mechanisms that reflect a kind of preprogrammed adaptive capacity. So, if an organism is under some sort of environmental stress, that stress triggers a response that either expresses preexisting genetic information or ramps up mutational processes in certain designated parts of the genome.

So, these are not random responses at all. They are under what he calls algorithmic control: a preprogrammed adaptive response to an environmental stress produces an evolutionary change that allows the organism to survive under certain conditions. This is very interesting biology.
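Shapiro's phrase "algorithmic control" can be pictured with a toy model. In the hypothetical sketch below (the region boundaries, mutation rate, and function names are all my illustrative assumptions, not Shapiro's), mutation is ramped up only in one designated region of a genome string, and only when a stress signal is present.

```python
import random

# Hypothetical illustration: a preprogrammed rule that mutates only a
# designated region of the genome, and only under environmental stress.
HYPERMUTABLE = slice(8, 12)  # designated hypermutable region (an assumption)

def stress_response(genome, stressed, rng):
    """Apply targeted, stress-triggered mutation to one genome region."""
    if not stressed:
        return genome            # no stress signal: no targeted mutation
    g = list(genome)
    for i in range(*HYPERMUTABLE.indices(len(g))):
        if rng.random() < 0.5:   # elevated mutation rate, but only here
            g[i] = rng.choice("ACGT")
    return "".join(g)

rng = random.Random(1)
genome = "ACGTACGTACGTACGT"
print(stress_response(genome, stressed=False, rng=rng))  # unchanged
print(stress_response(genome, stressed=True, rng=rng))   # region 8-11 may vary
```

Notice what the sketch presupposes: the rule itself (which region, which trigger, which rate) has to be written before any stress arrives. That preexisting rule is the information endowment in question.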

But there's a big question that Shapiro doesn't address: where does the preprogramming come from?

That original preprogramming implies an information endowment. An algorithm is, in effect, a computer program. So, there's information already in the cell telling it how to respond to these environmental stressors, and what we want to know is where that information comes from.

Shapiro doesn't answer that question, so his model, as interesting as it is, and as accurate as it is in describing some overlooked biological processes, does not answer the ultimate origins question, particularly the question of the origin of information.

Each of the models in the extended synthesis, each of these new mechanisms, has that kind of problem: it describes interesting biology but presupposes the origin of the information necessary for those processes to occur.


Crossway is a not-for-profit Christian ministry that exists solely for the purpose of proclaiming the gospel through publishing gospel-centered, Bible-centered content. Learn more or donate today at crossway.org/about.