Divining Design

A Review of The Design Inference: Eliminating Chance Through Small Probabilities by William A. Dembski, Cambridge University Press, 1998.

by Royal Truman on November 1, 1999

Originally published in Journal of Creation 13, no 2 (November 1999): 34-39.

Throughout Church history, a popular argument for the existence of God has been based on the appearance of designed structures in nature. William Paley pointed out in Natural Theology (published in 1802)1 that should one happen upon an unknown object, such as a watch, and analyse what it does, one would attribute its origin to an intelligent maker. It would be implausible that the individual, highly precise components should have been so arranged by chance.

By analogy, the existence of complex parts working together to perform some useful function should allow us to infer an intelligent maker in other contexts. Humans also create works of art and engineering, and so surely are qualified to recognize underlying intelligence in objects humans did not make.

Those determined to exclude God from the universe have always attempted to discredit these kinds of arguments. We are told that we recognize a watch as something made only because we know about human watchmakers. Dawkins, in his book The Blind Watchmaker, pursues another line of argument common in present evolutionary thinking: that complex systems developed stepwise over long periods of time, totally unguided, each step conferring some survival advantage, eventually producing by chance living objects even more complex than watches.

Thomas Huxley argued that, given enough time, a team of monkeys typing random letters on typewriters would eventually produce Shakespeare’s works.2,3

How reasonable are these objections? Is it possible to identify some events or objects as designed and not the result of chance?

Honed by a Ph.D. in mathematics (University of Chicago), another Ph.D. in philosophy of science (University of Illinois) and an M.Div. from Princeton Theological Seminary, Dr Dembski has tried to show that intelligent design can in principle be identified, and that this can be done in a mathematically rigorous way.

This is not a religious book in any manner. No explicit attempt is made to apply any conclusions either to living organisms or to the creation/evolution controversy. The point is quite simply that should designed structures or events exist, they are detectable. Dembski’s intention and personal beliefs can, however, easily be determined from his other writings.4 In The Creation Hypothesis5 he writes,

‘I shall demonstrate that the world is sufficiently fine-grained to produce events for which design is a compelling inference.’6

and later:

‘If ours is a world where God exists and actually does things that he intends us to know about, then naturalism prevents us from obtaining such knowledge. Methodological naturalism is therefore itself, to use Barbour’s phrase, “scientifically stultifying.”’7

With methodological naturalism as the basis of evolutionary thinking, ‘truth’ is by stipulative definition the best available theory that does not invoke God’s involvement in any manner. Many have pointed out that scientists should seek the best explanation, whether or not it is restricted to mechanistic principles.

The reason why one must sometimes expose evolutionists gently to growing doubts was pointed out clearly by Dr Wells in an essay found in Mere Creation:

‘Kuhn aroused the ire of many scientists when he argued that paradigms have philosophical and psychological components and are not readily discarded in the face of anomalous evidence.’8

The French mathematician Émile Borel coined a principle known as the Single Law of Chance, which he formulated as follows: ‘Phenomena with very small probabilities do not occur’ (p. 3). As it stands, the argument is not fully developed, since events of low probability actually can occur by chance. One can toss a large number of coins, register the sequence of ‘heads’ and ‘tails’, and then claim post facto that a highly unlikely combination occurred. In fact, any of the other possible outcomes is equally unlikely to be repeated in a subsequent attempt.

Dembski’s key insight, which will be elaborated on below, is stated succinctly: ‘Specified events of small probability do not occur by chance’ (p. 5).

In statistics there is the notion of hypothesis testing. A hypothesis H0 might be: ‘On average, women and men have equally long life spans.’ Since a study to test this proposal cannot include all women and men who have ever lived, some assumptions must be made about the samples collected for both sexes. By assuming some probability distribution for the value (women’s average lifespan minus men’s average lifespan), one can calculate the probability of obtaining by chance alone the difference in average age actually found, given the variation within each group and the number of people in each group. If that probability is very small, then one rejects the original hypothesis that men and women have the same average longevity and assumes that the difference found is due to some particular cause(s). Unfortunately, many alternative theories remain possible: perhaps women live, on average, two years longer than men; or six years longer; or men and women who never smoked live equally long; or, before medical care in child-bearing improved, women lived half as long as men.
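
The logic can be made concrete with a small simulation. The sketch below, assuming two made-up samples of ten lifespans each, uses a permutation test: if H0 is true, the group labels carry no information, so shuffling them shows how often chance alone reproduces a difference as large as the one observed.

```python
import random

# Hypothetical samples of lifespans in years (illustrative, not real data)
women = [81, 79, 85, 77, 83, 80, 84, 78, 82, 86]
men   = [76, 74, 79, 72, 78, 75, 73, 77, 71, 80]

observed = sum(women) / len(women) - sum(men) / len(men)

# Under H0 the labels 'women'/'men' are arbitrary, so we may shuffle them
# and see how often chance reproduces the observed gap.
pooled = women + men
n, trials, extreme = len(women), 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if abs(diff) >= abs(observed):
        extreme += 1

print(extreme / trials)  # a very small p-value leads one to reject H0
```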

Identification of design relies on a different concept for an event which is specified, either in advance or post-facto by comparing to a suitable pattern:

‘Thus, whereas statistical hypothesis testing eliminates chance because divergence from mathematical expectation is too great, the design inference eliminates chance because the fit with mathematical expectation is too close. We may therefore think of design and chance as competing modes of explanation for which design prevails once chance is exhausted’ (p. 8).

An example (pp. 9–20) is offered to illustrate the issues involved (New York Times, 23 July 1985, p. B1):

‘TRENTON, July 22—The New Jersey Supreme Court today caught up with the “man with the golden arm”, Nicholas Caputo, the Essex County Clerk and a Democrat who has conducted drawings for decades that have given Democrats the top ballot line in the county 40 out of 41 times … the court noted that the chances of picking the same name 40 out of 41 times were less than 1 in 50 billion.’9

Many evolutionists dismiss creationist probability arguments by saying: ‘So what? Some sequence or other had to result.’ But the key to Dembski’s analysis, exemplified by this case, is not the low probability alone; after all, any particular random sequence of 41 outcomes is equally improbable. The test is to try to duplicate the same series a second time. Two conditions are critical: a small probability and that the event be specified.10
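
The court’s figure is easy to check. Here is a minimal calculation, using only Python’s standard library, of the chance that fair 50/50 drawings would give one party the top line at least 40 times out of 41 (the binomial tail):

```python
from math import comb

# P(at least 40 Democratic top lines in 41 fair 50/50 drawings)
p = (comb(41, 40) + comb(41, 41)) / 2**41
print(p)      # ≈ 1.9e-11
print(1 / p)  # ≈ 5.2e10, i.e. roughly 1 in 52 billion, matching the court
```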

It is common knowledge that the first position on the ballot has the highest chance of being selected, all else being equal. Caputo knew this, had the opportunity to decide the positions of the political parties, and as a Democrat (D) wished to see his party win. This makes an unusually high proportion of Ds in the first position an outcome of recognizable significance, clearly an identifiable pattern. Coupled with the minuscule probability of such a sequence arising by chance, it is no wonder that the New Jersey Supreme Court said:

‘Confronted with these odds, few persons of reason will accept the explanation of blind chance’ (cited on p. 19).

While cheating

‘certainly is the best explanation of Caputo’s golden arm … the court stopped short of convicting Caputo, … [because] the court had no clear mandate for dealing with highly improbable ballot line selections’ (p. 19).

It would be easy to define in advance several extreme patterns of cheating on Caputo’s part. They would all show a very high number of Ds in the first ballot position. Such patterns are ‘detachable’, or independent of the event (p. 14).

It is important to understand the notion of specified events. Shooting an arrow and then drawing a bull’s eye around it, 100 times in a row, is a pattern called a fabrication: there is nothing unusual about where the arrows landed. Hitting a fixed bull’s eye 100 times in a row from a considerable distance suggests something entirely different (p. 13).

It is common belief that certain patterns identify an intelligent cause. Dembski points out that whole industries are based on this concept: intellectual property protection, forensic science, detection of data falsification in science, cryptography, and insurance. If three houses owned by the same person all burn down within a short period of time (low probability) and all prove to be insured beyond their true value (specification), then fraud can be inferred.

Dembski elaborates:

‘we need to understand what it is about intelligent agents that reveals their activity. The principal characteristic of intelligent agency is directed contingency, or what we call choice’ (p. 62).

The actualization of one among several competing possibilities, the exclusion of the rest, and the specification of the possibility that was actualized encapsulate how we recognize intelligent agents. … Exclusion establishes that there was genuine contingency (i.e., that there were other live possibilities, and that these were ruled out)’ (p. 63).

A good example of this can be found in the exclusively ‘left-handed’ amino acids found in proteins (coded by genes in DNA built with exclusively ‘right-handed’ sugars), even though synthesizing amino acids in a laboratory produces a 50/50 mixture of left- and right-handed forms (a racemate). In living organisms, not only are the hundreds of amino acids composing an average-sized protein exclusively ‘left-handed’, they have also managed to avoid all the other kinds of non-peptide reactions amino acids would have undergone in a hypothetical ‘primordial soup’.11 A racemate is worthless for building enzymes and other biological materials, so the useful outcomes can be specified. Of the astronomically large number of possible reaction products, a minuscule subset is purposefully generated in living cells.
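
The arithmetic is easy to sketch (my illustrative numbers, not Dembski’s): if each residue in a chain independently had even odds of being left- or right-handed, as in a racemate, homochirality is already absurdly improbable for a single modest protein.

```python
# Chance that a chain of n residues is all left-handed, if each residue
# independently had a 50/50 chance of either handedness (racemate assumption)
n = 150     # a modest protein length, chosen purely for illustration
p = 0.5 ** n
print(p)    # ≈ 7e-46 for one 150-residue chain
```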

Dembski illustrates the principle with this example:

‘to recognize whether a rat has successfully learned how to traverse a maze, a psychologist must first specify the sequence of right and left turns that conducts the rat out of the maze’ (p. 64).

The number of turns must be large enough to exclude the possibility that, over a large number of attempts, one rat was simply lucky.
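
This trade-off is easy to sketch (the maze sizes and attempt counts below are illustrative, not from the book):

```python
# Chance that a rat guessing randomly at each of n binary junctions exits
# the maze, allowing for some number of independent attempts
def p_lucky(n_turns, attempts):
    p_single = 0.5 ** n_turns
    return 1 - (1 - p_single) ** attempts

print(p_lucky(5, 100))   # ≈ 0.96: with five turns, luck is a live explanation
print(p_lucky(50, 100))  # ≈ 9e-14: with fifty turns, luck is effectively excluded
```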

In the SETI program (Search for Extraterrestrial Intelligence), patterns are sought among radio-wavelength signals from outer space. Some patterns, such as a long series of prime numbers in ascending order, have been specified in advance as examples that would demonstrate an intelligent sender. How might other patterns of 0s and 1s be distinguished from random series?

‘In the 1960s, the Russian probabilist Andrei Kolmogorov investigated what makes a sequence of coin flips random. … What Kolmogorov said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length’ (p. 32).

A series of a hundred 1s can be described by the program ‘repeat “1” a hundred times’. A slightly more random series might be encoded as ‘repeat “1” fifty times, then repeat “0” fifty times’ (p. 33). A truly random series can only be represented, at best, by a command such as ‘copy “11000110101110010000011 …”’.
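
Kolmogorov complexity itself is uncomputable, but a crude and commonly used stand-in is the size a general-purpose compressor achieves. A sketch with Python’s zlib module (the exact byte counts will vary; only the ordering matters):

```python
import zlib, random

def compressed_size(bits: str) -> int:
    # Length of the zlib-compressed string: a rough proxy for how short a
    # program generating the string could be
    return len(zlib.compress(bits.encode()))

ordered = "1" * 100
halves  = "1" * 50 + "0" * 50
noisy   = "".join(random.choice("01") for _ in range(100))

for label, s in [("ordered", ordered), ("halves", halves), ("random", noisy)]:
    print(label, compressed_size(s))
# The random string typically compresses least, echoing Kolmogorov's criterion.
```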

The explanatory filter

To assign regularity, chance or design to an event, Dembski proposes that one try these three explanations in that order.

(a) If an outcome is deterministic, or has a high probability of occurring and thus can be explained by a natural law, then regularity should be assumed. This is not to say that God does not ultimately lie behind the scenes as the Lawgiver, but such an explanation would be based on non-observational criteria.

‘For the filter to eliminate regularity, one must establish that a multiplicity of possibilities is compatible with the given antecedent circumstance (recall that regularity admits only one possible consequence for a given antecedent circumstance); hence to eliminate regularity is to establish a multiplicity of possible consequences’ (p. 65).

(b) If regularity fails as an explanation, one should then see whether chance is acceptable. These are events of intermediate probability, ‘the events we reasonably expect to occur by chance in the ordinary circumstances of life’.

(c) Only once chance has been excluded is design assumed to be the cause. These events are characterized by patterns that are both specified and of vanishingly small probabilities. This approach is conservative in that ‘past specifications will continue to be specifications, though past fabrications (i.e., patterns that in the past failed to count as specifications) may because of improvements in technology now become specifications’ (p. 161).

A seemingly random pattern may later be discovered to contain information. In a practical sense, biological features such as ‘junk DNA’ may very well be found in the future to have a use, just as functions have been found for organs previously classified as ‘vestigial’.12,13
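
The filter’s decision logic can be sketched schematically in code. The thresholds and the ‘specified’ flag below are illustrative stand-ins of mine; the book develops both notions with far more care.

```python
HIGH_PROB = 0.5              # illustrative cut-off for lawlike regularity
UNIVERSAL_BOUND = 0.5e-150   # Dembski's universal small-probability bound (p. 209)

def explanatory_filter(probability: float, specified: bool) -> str:
    if probability >= HIGH_PROB:        # (a) expected under natural law
        return "regularity"
    if probability > UNIVERSAL_BOUND or not specified:
        return "chance"                 # (b) intermediate probability, or
                                        #     no independent (detachable) pattern
    return "design"                     # (c) specified and vanishingly improbable

print(explanatory_filter(0.9, False))    # regularity
print(explanatory_filter(1e-6, False))   # chance
print(explanatory_filter(1e-200, True))  # design
```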

These three alternatives are complete and mutually exclusive.

‘The design inference, on the other hand, eliminates chance in the global sense of closing the door to every relevant chance explanation’ (p. 42).

It must be pointed out that judging probabilities requires background information about how the event E could have arisen. Seeing some coins lying on a table, with no knowledge of their history, does not allow strong statements to be made, compared with the case of watching the coins being flipped and dropped. Low probabilities are assigned on the basis of what we know from experience and scientific experimentation. I suggest that we have good reasons to be sceptical of a claim that an oil painting of Queen Elizabeth II resulted from a tram full of paint cans derailing in front of Buckingham Palace.

Dembski demonstrates in some detail the completeness and logical coherence of the three alternatives using first-order predicate logic in section 2.2.14 Following the discussion requires some background in De Morgan’s rules.15 For the mathematically sophisticated, this level of rigour mainly ensures that one is not being careless in how the arguments are presented. It has the added advantage of forcing one to identify which of the logical components one disagrees with if the conclusion is not acceptable. In the case of explaining how life arose on earth, the design inference takes the following form (p. 56):

Premise 1: LIFE has occurred.
Premise 2: LIFE is specified.
Premise 3: If LIFE is due to chance, then LIFE has small probability.
Premise 4: Specified events of small probability do not occur by chance.
Premise 5: LIFE is not due to a regularity.
Premise 6: LIFE is due to regularity, chance, or design.
Conclusion: LIFE is due to design.

Life can be specified, for example by characteristics such as reproduction and a genetic code.
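
For readers who want the deductive skeleton symbolically, here is one compact rendering (the notation is mine, as a sketch; Dembski’s own formalism in section 2.2 is more elaborate):

```latex
% oc = occurs, sp = specified, sm = small probability,
% reg/ch/des = due to regularity/chance/design, L = LIFE
\begin{align*}
&\text{P1, P2:} && \mathrm{oc}(L),\ \mathrm{sp}(L)\\
&\text{P3:}     && \mathrm{ch}(L) \rightarrow \mathrm{sm}(L)\\
&\text{P4:}     && \forall E\,[\mathrm{sp}(E) \wedge \mathrm{sm}(E) \rightarrow \neg\mathrm{ch}(E)]\\
&\text{P5:}     && \neg\mathrm{reg}(L)\\
&\text{P6:}     && \mathrm{reg}(L) \vee \mathrm{ch}(L) \vee \mathrm{des}(L)\\
&\text{P2--P4:} && \mathrm{ch}(L) \rightarrow \neg\mathrm{ch}(L), \text{ hence } \neg\mathrm{ch}(L)\\
&\text{P5, P6:} && \therefore\ \mathrm{des}(L)
\end{align*}
```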

Dembski shows that the well-known evolutionist Richard Dawkins, through his writings, would accept premises 1, 2, 4, 5 and 6, and is left to dispute premise 3. Somehow LIFE must be claimed to be not all that improbable. The odds against obtaining the right molecular arrangements to support life are astronomical. Dawkins and others have argued that there ‘must’ be many planets suitable for life to develop, so that although life is unlikely on any individual planet, somewhere it would appear.

I find it ironic that many evolutionists who use the same argument as Dawkins would have us believe there could be life on Mars. Surely the odds of life developing on both planets are even smaller than for the earth alone. Would such a finding not destroy the preceding argument? One can see, however, how the stories would go. If no life is found on other planets, we will be informed: ‘You see, I told you how unlikely it is. The earth just happened to be the lucky one.’ If, on the other hand, life were found, we will hear: ‘You see, it is no big deal. Life can pop up anywhere. No need for a God.’

Dembski observes:

‘Advocates of the Anthropic Principle like Barrow and Tipler[16,17] posit an ensemble of universes so that LIFE, though highly improbable in our own little universe, is nevertheless virtually certain to have arisen at least once …’ (p. 60).

Dembski, like others, regards explanations for which no evidence exists, nor by definition could ever be found, as a dishonest way of avoiding facing an issue squarely.

I am not convinced that invoking a huge number of non-interacting universes is even in principle a scientific argument, i.e., that it could make an unlikely event in our universe less improbable ‘since it had to happen somewhere’. There would still have to be a super ‘privileged frame’ of reference encompassing the individual universes. Isolated, non-interacting times do not allow a common basis for probabilities.

Could I be right? It is a statistical principle that past outcomes of independent events do not affect the probability of the next outcome. The chance of a fair coin being flipped 8 times in a row and showing only ‘heads’ can be stated in advance as being low (2⁻⁸, i.e. 1/256). But if 7 ‘heads’ have already come up in a series, and we ask at that point what the odds are of getting another ‘heads’, they are still 50:50 for a fair experiment. The fact that 8 ‘heads’ in a row, specified a priori, is improbable does not change the probability of an eighth head showing up once 7 ‘heads’ have already appeared. Even if a million previous universes had failed to generate life, that would not improve the chances for our own once our own time reference began. But again, the whole thought experiment assumes an external reality that encompasses all universes, which allows a common basis for time and for probabilistic rules to hold.
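
The independence point is easy to verify by simulation (a sketch using Python’s random module):

```python
import random

# Among runs whose first 7 flips were all heads, how often is the 8th a head?
trials, sevens, eighths = 1_000_000, 0, 0
for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(8)]
    if all(flips[:7]):
        sevens += 1
        eighths += flips[7]

print(eighths / sevens)  # ≈ 0.5: the eighth flip ignores the first seven
```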

To understand what low probability means, Dembski writes:

‘All that the design inference requires of a probabilistic apparatus is that it assign probabilities to events … always the probability of an event in relation to certain background information’ (p. 69).

Unlike the Bayesian approach, in which additional information can increase the probability that some hypothesis is indeed correct, the design theorist is in the business of excluding the hypothesis that chance is an explanation. This requires a good understanding of probability theory. A key skill is being able to compute the number of alternative permutations consistent with the relevant causal factors.

Most people cannot estimate probabilities very accurately. A classical example is the ‘birthday question’: how likely is it that two people share a birthday in a random sample of 30 people? Most would say intuitively, quite unlikely.18 However, ‘Again the numerical consequences are astounding. Thus for r = 23 people we have p < ½, that is, for 23 people the probability that at least two people have a common birthday exceeds ½’ (p. 76).

The reason is that all paired comparisons among everyone in the sample must be made, not only comparisons with a single person.
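
The standard calculation multiplies the ‘no match so far’ probabilities person by person; a short sketch:

```python
# Probability that among n people at least two share a birthday
# (365 equally likely birthdays assumed, leap years ignored)
def p_shared(n: int, days: int = 365) -> float:
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (days - k) / days
    return 1 - p_distinct

print(p_shared(23))  # ≈ 0.507: already better than even odds
print(p_shared(30))  # ≈ 0.706: far from 'quite unlikely'
```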

Which patterns are truly generated by random factors is not so easily determined by the untrained:

‘A standard trick of statistics professors in teaching introductory statistics is to have half the students in a class each flip a coin 100 times, recording the sequence of heads and tails on a slip of paper, and then have each student in the other half as a purely mental act mimic a sequence of 100 coin tosses. … The statistics professor simply looks for a repetition of six or seven heads or tails in a row to distinguish the truly random. … In a hundred coin flips one is quite likely to see six or seven such repetitions. … As a matter of human psychology people expect that one toss will differ from the next around seventy percent of the time’ (p. 138).
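
A quick simulation bears out the professor’s trick (a sketch using Python’s random module):

```python
import random

def longest_run(flips):
    # Length of the longest run of identical consecutive outcomes
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

trials = 10_000
hits = sum(longest_run([random.randint(0, 1) for _ in range(100)]) >= 6
           for _ in range(trials))
print(hits / trials)  # ≈ 0.8: runs of six or more are the norm, not the exception
```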

A rigorous analysis of the likelihood that an event will occur given background information assumes that the information has been used as fully and effectively as possible (p. 78). However, ‘There is no algorithm that for every event-information pair invariably outputs the right probability’ (p. 85). I should point out that statements one often comes across, such as ‘With enough time anything is possible’ or ‘After enough random trials, evolution eventually came up with a functional knee’, are inevitably never accompanied by any mathematical calculations.

Complexity theory, which analyses how difficult a problem is to solve with available resources, is shown to be related to the notion of low probabilities. In simplified form: if a huge number of computers working together with algorithms of maximum effectiveness would require a vast amount of time to (possibly) solve a problem, then the problem is complex and difficult, and it has a low probability of being solved (chapter 4). If all theoretically possible resources would not suffice, the problem is called intractable.

Is this relevant to the question of design? Certainly: since the universe is not infinitely old, the resources available to solve the problem of creating life are limited. Dembski writes:

‘The solutions to mathematical problems are widely held to be noncontingent since mathematical propositions are regarded as necessarily true or false. Nevertheless, the capacity of rational agents to solve mathematical problems is contingent, depending on the resources available to these agents’ (p. 128).

Dembski introduces the concept of probabilistic resources to mean all the possible generators (machines, people, etc.) which could produce a class of outcomes, one of which includes the pattern of interest.

The view that the universe had a beginning is not restricted to the ‘big bang’ theory or even the Genesis testimony. A classical argument developed by Jewish and Muslim theologians, known as the ‘Kalàm cosmological argument’,19,20,21,22 points out that without a beginning it would not be possible to have arrived at the present point in time; that would be like trying to jump out of an infinitely deep, bottomless pit. Since time and matter are limited, the resources to solve a problem or to build a complex structure are also limited. Just as many mathematical problems are intractable, the unstated conclusion is that evolutionary mechanisms cannot work even in principle.

Dembski has now brought us to the point where an important notion can be properly appreciated: the concept of a specified probability so low that the event can never happen. One way of setting a lower bound, beyond which an event can be said never to occur, takes into account three things: the total number of particles in the universe (10⁸⁰), the amount of time available under evolutionary cosmology (10²⁵ seconds) and the maximum number of discrete changes a particle could undergo per second, based on the Planck time (10⁴⁵ alterations per second).

Then the total number of specified events throughout cosmic history cannot exceed 10¹⁵⁰ (p. 209). If we take half this number, we can state confidently that a specified event whose probability is less than 0.5 × 10⁻¹⁵⁰ should never be expected to happen, ever.
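
The arithmetic behind the bound is simple to reproduce:

```python
particles   = 10 ** 80  # elementary particles in the observable universe
seconds     = 10 ** 25  # generous time allowance under evolutionary cosmology
transitions = 10 ** 45  # maximum state changes per particle per second (Planck time)

total_events = particles * seconds * transitions
print(total_events == 10 ** 150)  # True
# Halving this yields Dembski's universal bound of 0.5 x 10^-150: no specified
# event below that probability should ever be expected to occur by chance.
```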

This, in my view, summarizes the key points, omitting Dembski’s mathematical axioms and formal notation. These have the advantage of minimizing any ambiguity in what is being stated, but they often require some sophistication in this manner of reasoning.

Now, will a formal mathematical basis for identifying design be useful in the creation/evolution controversy? I believe this book will indeed be quoted as a reference when evaluating some of the probability figures one encounters. For example, MIT biochemist Sauer’s detailed calculations show there is about a 10⁻⁶⁵ chance of obtaining a single medium-sized functional protein by chance.23 However, there are thousands of different kinds of proteins in mammals, used to build organs, tissues, enzymes and so on. The chance of three unrelated types of protein forming by chance (generously ignoring the scheduling problem) is then 10⁻⁶⁵ × 10⁻⁶⁵ × 10⁻⁶⁵ = 10⁻¹⁹⁵. We see immediately that dozens of proteins will not form by chance.
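
Setting this figure against Dembski’s universal bound (using exact rational arithmetic to avoid floating-point underflow):

```python
from fractions import Fraction

bound   = Fraction(1, 2) / Fraction(10) ** 150  # Dembski's bound, 0.5 x 10^-150
p_one   = Fraction(1, 10) ** 65                 # Sauer's estimate for one protein
p_three = p_one ** 3                            # three unrelated proteins, 10^-195

print(p_three < bound)  # True: already far below the universal bound
```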

An evolutionist argument that some kind of selection process can overcome chance is only so many empty words without a detailed stepwise proposal. The chance of obtaining 999 ‘heads’ upon flipping 1000 fair coins together is exactly the same as obtaining 999 ‘heads’ upon flipping a single coin 1000 times in a row (unless the coin-tosser dies during the longer sequential attempt, or an earthquake scrambles some of the successful ‘heads’ in the interval; on average, unguided time works against our goals).24

The chance that a hypothetical sequence of unguided events would lead to ever more complex structures, culminating in a cell complete with DNA which could then undergo Darwinian-type competition and selection, is no better than the chance that such a cell should appear in one miraculous unguided jump. The individual steps must all occur before the duplication apparatus and correction mechanisms can work. During the whole process, the intermediates can undergo countless undesirable reactions, destroying any progress made towards the target. The chance that every step would occur, for even one cell, given the total time and material available, makes this proposal implausible.

The Intelligent Design Movement

This thesis is an important piece in the logical arsenal of the growing Intelligent Design (ID) movement.25 Its attempts to free academia and research funds from the stranglehold of methodological materialism can only help the creationist movement. One must, however, be careful not to forget the ultimate goal: to bring people to a saving knowledge of the crucified and risen Saviour. Exciting as this new forum is, one may become bogged down in overly theoretical discussions, or become too careful not to offend those involved who do not share our faith.

I myself support the ID movement as a stepping stone for current evolutionists unwilling to go too far out on a limb too quickly. After all, a fortune has been spent on claims of how irrational and dangerous Christian fundamentalists are. But I believe a direct commitment to a concrete position, such as a literal Genesis reading and a young-earth model, is more fruitful. For hearts already prepared, one starts closer to the goal, and this approach also permits concrete models to be explored scientifically, such as fitting the Flood to the geological record, instead of keeping every possible theistic position open.

Footnotes

  1. Recently republished in edited and abridged form as Cooper, B., Paley’s Watchmaker, New Wine Press, Chichester, West Sussex, UK, 1997.
  2. For a discussion of this claim and Dembski’s answer, see on-line article at: <http://www.arn.org/docs/dembski/wd_convmtr.htm>.
  3. Grigg, R.M., Could monkeys type the 23rd psalm? Apologia 3(2):59–63, 1994; updated from his 1991 article, in Creation, 13(1):30–34.
  4. See some very good on-line articles at: <http://www.arn.org/dembski/wdhome.htm>.
  5. Moreland, J.P., ed., The Creation Hypothesis. Scientific Evidence for an Intelligent Designer. InterVarsity Press, Downers Grove, Illinois, 1994.
  6. Dembski, W.A., Chapter 3, In: Moreland, Ref. 5, p.120.
  7. Dembski, Ref. 6, p.133.
  8. Dembski, W.A., ed., Mere Creation. Science, Faith & Intelligent Design with Contributions by Michael Behe, David Berlinski, Phillip Johnson, Hugh Ross and Others, InterVarsity Press, Downers Grove, Illinois, p. 66, 1998.
  9. For more details on this incident, see Dembski’s on-line article at: <http://www.arn.org/docs/dembski/WD_explfilter.htm>.
  10. See Batten, D., Cheating with chance, Creation 17(2):14–15, 1995, for refutation of some fallacious evolutionary arguments against creationist probability calculations.
  11. Sarfati, J.D., Origin of life: the chirality problem, CEN Tech. J. 12(3):263–266, 1998.
  12. Bergman, J. and Howe, G., ‘Vestigial Organs’ Are Fully Functional, p. 77, Creation Research Society Books, Kansas City, 1990.
  13. Murris, H.R., Vestigial organs: A creationist re-investigation, Origins 5(13):10–15, 1992.
  14. The branch of formal logic systematising the relations between predicates involving the quantifiers such as ‘all/every’, ‘no/none’ and ‘some/a’.
  15. The two theorems in symbolic logic formalized by Augustus De Morgan (1806–1871), relating the negation of conjunctions and disjunctions of propositions p and q: (i) ‘not (p or q)’ is equivalent to ‘not-p and not-q’, symbolically ~(p∨q) ≡ ~p·~q; (ii) ‘not (p and q)’ is equivalent to ‘not-p or not-q’, symbolically ~(p·q) ≡ ~p∨~q.
  16. Barrow, J. and Tipler, F., The Anthropic Cosmological Principle, Clarendon Press, 1986. This is probably the most comprehensive study of the fine-tuning of the universe. They, however, reject the existence of a Creator.
  17. See also Craig, W.L., Barrow and Tipler on the anthropic principle vs divine design, Brit. J. Phil. Sci. 38:389–95, 1988; similar online article at <http://www.leaderu.com/offices/billcraig/docs/barrow.html>. Craig points out the central fallacy of their argument for rejecting a Creator. Once this fallacy is removed, the book becomes a compendium of data of modern science which point to design in nature inexplicable in natural terms and therefore pointing to a Divine Designer (although beware of ‘design’ arguments presupposing the unscriptural ‘big bang’).
  18. The atheistic evolutionist Russell Doolittle used this deceitful argument in a debate with Duane Gish at Iowa State University on October 2, 1980, in a desperate attempt to neutralise Gish’s strong probability arguments for creation. This trick succeeded in hoodwinking the audience, because Dr Gish, a biochemist not a probabilist, couldn’t refute it at the time. See Gish, D.T., Creation Scientists Answer Their Critics, Institute for Creation Research, El Cajon, CA, pp. 94–95, 1993.
  19. Craig, W.L., The Kalàm Cosmological Argument, Barnes and Noble, New York, ch. 14, 1979; see also his online article The Existence of God and the Beginning of the Universe at <http://www.leaderu.com/truth/3truth11.html>.
  20. Moreland, Ref. 5, pp. 18–23.
  21. Dembski, Ref. 8, chapter 14.
  22. Sarfati, J.D., If God created the universe, then who created God?, CEN Tech. J. 12(1):20–22, 1998.
  23. Discussed by Stephen Meyer in the on-line article at: <http://www.arn.org/docs/meyer/sm_origins.htm>.
  24. Truman, R., Dawkins’ Weasel Revisited, CEN Tech. J. 12(3):358–361, 1998.
  25. Ury, T.H., Mere Creation conference, CEN Tech. J. 11(1):25–30, 1997.
