On Hypothesis Formation

A comparison of dialectic approaches in classical philosophy and modern approaches to scientific research reveals a consistent theme of dueling opposition as the leading approach to scientific understanding. A common method of experimental design hinges upon testing hypotheses by building and testing distinct, largely mutually exclusive models in order to produce a binary output. For example, the ancient Greek philosopher Democritus was a proponent of the atomic hypothesis, which put forth the idea that all matter was composed of physically separate atoms and thereby implied an underlying mechanistic explanation of the nature of reality. In direct opposition to this idea was Aristotle’s belief that all matter consisted of four core elements (earth, air, water, and fire). Both of these ancient Greek approaches to understanding matter incorporated a reductionist view, breaking matter down into its core components, but they are ultimately at odds with one another regarding the details of those components, and thus they could be described as being in dueling opposition.

Modern science has emerged from this classical tradition of diametric opposition, and the generation of testable hypotheses has largely followed a similar format. While the presence of technology influences the results of scientific experimentation, it does not appear to influence the philosophical approach to discovery. For example, Aristotle emphasized that “credit must be given to observation rather than to theories, and to theories only if what they affirm agrees with the observed facts” (Adler 769). Yet, at that point in history it was not possible to easily observe or quantify the atomic interactions that would ultimately validate Democritus’ theories. A similar impasse was reached thousands of years later, in the late twentieth century, when particle physicists debated the merits of the Standard Model versus Higgsless models to explain the forces governing elementary subatomic particles. Despite substantial advances in technology, the modern philosophical approach to scientific research remains highly reminiscent of the dialectic method of the ancient Greeks: one of pitting diametrically opposed ideas into direct competition, dependent upon emerging evidence.

In this way, the modern philosophical approach to science has roots in competitive dualism as the predominant method of obtaining scientific progress. Indeed, during my own undergraduate education, most faculty sought to simplify complex scientific concepts into generalizations of two dueling hypotheses in order to understand them. As the French physicist Poincaré pointed out, “every generalization is a hypothesis” (Adler 329). For example, the nature versus nurture debate regarding adolescent psychological development seeks to divide the major influences on human behavior into either experience-dependent learning or innate inherited predisposition. The broader scientific community would likely concede that both categories influence behavioral psychology in different ways in different contexts. However, such oversimplifications persist within scientific research because of their utility in the early educational training of scientists. Scientific research which relies on the formation of dueling hypotheses as a way of understanding the underlying truth is often subject to oversimplification and generalization, which inherently lack a full contextual perspective.

The classical approach of dueling opposition attempts to favor utility of outcome by inherently limiting the number of possible conclusions of a proposed experimental design. As Thomas Kuhn puts it, for any given problem, “There must also be rules that limit both the nature of acceptable solutions and the steps by which they are to be obtained” (Kuhn 38). When one designs an experiment anticipating a binary outcome, the results are expected either to support the hypothesis, and thereby the model, or to refute the hypothesis, and thereby the model, but this is not necessarily the case in real-world experimentation. For example, Camillo Golgi proposed that the entire nervous system is one continuous network of fibers rather than one of discrete cells, while his detractor Ramón y Cajal supported the neuron theory, in which cells are individual units. Careful dissection and microscopic assessment of cells produced evidence to support both hypotheses. Neither Golgi nor Cajal produced results to support one standalone model, and as a result they shared the 1906 Nobel Prize in Physiology or Medicine, much to Golgi’s chagrin. Dual opposition as an experimental design does not ensure binary results that confirm a specific scientific theory or hypothesis.

Furthermore, even binary results within scientific experiments do not necessarily affirm the underlying model or bring one closer to a natural law. Binary results which confirm or deny a specific hypothesis are limited by the scope of available knowledge underlying the formation of a given model. For example, the meticulous observations of the Danish astronomer Tycho Brahe led him to propose a geocentric model of organization in the solar system. However, Johannes Kepler utilized many of those same observations to refine the heliocentric model of Nicolaus Copernicus, in which the earth orbits the sun, thereby producing a different model from similar observations. Another example is the false belief that stomach ulcers were predominantly caused by stress, based on correlative data from patient interviews. The 2005 Nobel Prize in Physiology or Medicine was awarded to Barry Marshall and Robin Warren for identifying the bacterium Helicobacter pylori as the underlying cause of stomach ulcers (Warren and Marshall 1983). Both examples illustrate how results and data can be interpreted in favor of an incorrect scientific model. A dualistic approach to experimental design does not necessarily ensure a definitive binary output of results in support of a more correct scientific model.

No direct relationship between the formation of dueling hypotheses and increased validity of scientific results has been demonstrated thus far. Admittedly, the dominance of the dualistic approach throughout history has prevented this comparison, but there are also examples of successful scientific theories that developed in a domain without competing models. Perhaps the most famous example of scientific theory as an emergent phenomenon is evident in Charles Darwin’s famous expedition on the Beagle. Darwin writes, “Natural selection acts only by the preservation and accumulation of small inherited modifications, each profitable to the preserved being; and as modern geology has almost banished such views as the excavation of a great valley by a single diluvial wave, so will natural selection banish the belief of the continued creation of new organic beings, or of any great and sudden modification of their structure” (Darwin 1859). At the time, the theory of natural selection was formed from a vast array of observations gathered by many different scientists and emerged as a consensus, rather than as a model specifically designed to compete with an accepted model. In modern times, the discussion has morphed into a debate between evolution and creationism regarding the origin of man; however, the original theory of natural selection was far less entrenched in disproving a creationist model. Although modern theories, like the theory of natural selection, necessarily contradict or call into question common assumptions within the scientific community, the hypothesis was not generated specifically to contradict an existing model.

An alternative approach to addressing dueling models by hypothesis is one which emphasizes the concatenation of unbiased observations to inform experimental design. Although it may resemble hypothesis formation based on testing dominant models, this view is less restrictive in its scope and is often referred to as “hypothesis-free” science or “unbiased” experimentation. An excellent example of this was the Human Genome Project, a massive collaborative international effort bridging the private sector and government-funded labs to gain insight into the genetic sequences of Homo sapiens. The rippling effects of these discoveries and observations have provided tremendous insights into various forms of cancer as well as a diverse array of rare diseases. However, these discoveries were not based on ruling out one model versus its opposite. Michael Faraday described the value of forming hypotheses on observations rather than models when he wrote, “As an experimentalist, I feel bound to let experiment guide me into any train of thought which it may justify, being satisfied that experiment, like analysis, must lead to strict truth if rightly interpreted, and believing also that it is in its nature, far more suggestive of new trains of thought and new conditions of natural power” (Adler 768). Experimental data acquisition can produce useful scientific results without necessarily becoming dependent on the formation of dueling hypothetical models.

Importantly, and perhaps ironically, alternative approaches of observation and hypothesis-free testing can also produce novel hypotheses and models for future testing. Provided the accumulation of sufficient observations, one can more easily identify causative relationships between variables. The modern study of statistics employs these assumptions in the idea that any variable with a normal (i.e. Gaussian) distribution will have a finite mean (µ) and a finite standard deviation (σ). As the number of observations (N) approaches infinity, the sample mean converges upon the true mean, and the uncertainty of that estimate shrinks. As a simpler example, if one wanted to know the average age of humans, the most accurate measure would be to collect every single person’s age on earth. The mean calculated from the first two individuals will likely be inaccurate, but concatenation of all the data will converge on the true mean and has the added benefit of providing more information for multiple different tests. More importantly, in the absence of a well-constructed hypothesis one can still utilize the large dataset to test existing alternative hypotheses, which is exactly what happened with the 1,000 Genomes Project. The large amount of DNA sequenced provided researchers with the ability to identify differential rates of disease and heredity in different populations. Large-scale hypothesis-free testing can produce results in high enough abundance that they give rise to new testable hypotheses within the large dataset.
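To state the statistical claim concretely (this is the standard textbook result for independent, identically distributed observations, not a claim specific to the genomic examples above): the sample mean of N observations converges on the true mean µ, while the standard error of that estimate shrinks with the square root of N:

\[ \bar{x}_N = \frac{1}{N}\sum_{i=1}^{N} x_i \;\longrightarrow\; \mu \quad \text{as } N \to \infty, \qquad \sigma_{\bar{x}_N} = \frac{\sigma}{\sqrt{N}} \;\longrightarrow\; 0. \]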

More importantly, in the absence of an established model describing some natural phenomenon, the curious scientist must generate one without competing models to draw upon. Every historical example of competing scientific models necessarily originated, at some point, from observations made in an environment devoid of proposed models. For example, Richard Feynman’s parton model built upon Albert Einstein’s theories, which built upon Sir Isaac Newton’s theory of gravity, which was dependent upon Johannes Kepler’s laws of planetary motion, which were dependent upon Nicolaus Copernicus’s heliocentric theory, which has been attributed to Aristarchus in ancient Greece, and so on ad nauseam. At a certain point in tracing the historical origins of modern scientific theory, one reaches an event horizon at which there must presumably have been innovation of thought based solely on observations rather than a formal response to a developed model. An illustration of a scientific model developed independently of other models of the same era is the ancient Mayan calendar, based on astronomical observations using azimuths and the tracking of celestial bodies. These observations resulted in a working model based on measured observation of the natural world, independent of the models identified by Ptolemy. In the absence of an established, falsifiable scientific model, it becomes necessary to construct one’s own model from accumulated scientific observations.

Testing a specific model via a firmly entrenched hypothesis puts a scientist at risk of becoming highly dependent on a specific outcome or anticipated scientific result. A hypothesis that there is a specific causal relationship between two variables biases the researcher toward anticipating, and relying upon, that result coming true in order for scientific progress to move forward rapidly. An example of this approach would be to test either Gene A or Gene B as the hypothesized underlying cause of a disease state. If one believed a particular gene (Gene A) causes a specific disease, then one could analyze Gene A to see how it changes in a healthy individual versus a diseased one. This method will confirm or deny the hypothesis, but one result is clearly a more beneficial outcome than the other. First, let us assume the result is that the level of Gene A is greatly changed in the disease state compared to the healthy state. This is a highly desirable result and entices the researcher to investigate Gene A and its associated biology more deeply. Alternatively, let us assume that the researcher finds no change in Gene A. This result is not appealing to collaborators, grant reviewers, or the scientist in terms of collecting accolades, funding, or, most importantly, scientific insight. Now the researcher is faced with returning to test Gene B, Gene C, and so on ad nauseam, with only an incremental increase in the likelihood of achieving the desirable result: the identification of a causative gene. Restricting scientific experimentation to deciding between dichotomous hypothetical models tends to put the researcher at risk of implicit bias toward the more favorable result because it greatly restricts the scope of experimental results.

Designing experiments with a hypothesis that favors a particular model can put a scientist at an increased risk of Type I errors. A Type I error is the rejection of a true null hypothesis (a false positive). A real-world example of this concept is the early miasma theory regarding the cause of bubonic plague in Europe. The miasma theory hypothesized that the black plague epidemic originated from the spread of disease by “bad air” formed from rotting organic matter, with the null hypothesis being that the disease was not spread through the air. To test this idea, one might remove rotting organic matter, find that the rates of infection fall, and conclude that the miasma theory is correct, but this is a Type I error. It is easy to imagine that less rotting organic matter would mean fewer animals such as rats scavenging food waste, which means fewer fleas, which in turn means less bacterial infection from flea bites. The miasma theory was eventually replaced by the germ theory when it was discovered that the Black Death was spread by the bacterium Yersinia pestis through flea bites. By specifically favoring results that bolster the miasma model, one risks concluding a false positive: that organic decay causes bubonic plague. Perhaps a more contemporary example is the perceived correlation between vaccines and autism. Vaccines are typically introduced to neonatal children in the first year of life. Abnormal behavior or differences in sensory perception in autistic children also typically become observable during these same first years of life. The timing of diagnosis shortly after vaccination has led to the erroneous conclusion that these two events are somehow causally related based solely on a temporal relationship, but by this same token the use of diapers or infant formula could be deemed the cause of autism. Overreliance on one specific model leads to interpretation of results which favors the hypothesis underlying that model; this is commonly referred to as confirmation bias. Interpreting data to specifically address one hypothesis over another tends to bias the interpreter toward a particular result, or set of results, and thus toward an increased risk of Type I errors.

Similarly, reliance upon a specific model when forming a hypothesis can also put the researcher at higher risk of Type II errors. A Type II error is the acceptance of a false null hypothesis (a false negative), which leads the researcher to abandon what would otherwise be the correct course of action. For example, triple-negative breast cancer involves the growth of tumors which do not express estrogen receptors, progesterone receptors, or HER2 protein receptors. While one could conclude that a tumor is benign because it is negative for these markers, in some cases these cancerous tumors continue to grow and proliferate. A clinician, pathologist, or other diagnostician may favor the outcome that their patient does not have breast cancer but instead suffers from a different ailment, and in some rare cases interpret the triple-negative result incorrectly, thereby producing a Type II error. Another example of a proclivity for Type II error could occur in genetic knockout phenotyping in neurobiology. Let us say that a scientist makes a transgenic animal which lacks a specific gene (Gene A), which has been proposed by a competing lab to cause blindness. The scientist examines all of the rod and cone cells of the retina and concludes that the eye is normal and that Gene A does not cause blindness. To be sure of this, the scientist would also have to examine all of the amacrine cells, the retinal ganglion cells, the Müller glia, the vasculature, the tectum, the visual cortex, and so on ad nauseam. In both examples, the continuation of additional testing requires time, energy, and money, and the motivation to continue may be curtailed by premature acceptance of the null hypothesis. Typically, favoring one theory over another leads to favoring one hypothesis over another, which in turn can lead to favoring one interpretation of results over another, which can lead to a Type II error, also known as acceptance of a false negative.

Given the tendency of researchers to be biased toward particular theories, recent support for hypothesis-free testing has attempted to rectify this tendency toward error, but the approach is limited by its inability to directly refute potential outcomes. Despite the ability to examine tens of thousands of genes from thousands of organisms simultaneously, Nobel laureate Sydney Brenner once referred to this approach as “low input, high throughput, no output” science due to its inability to address specific theories (Friedberg 2008). For example, the field of developmental biology often favors single-cell RNA sequencing to assess changes in gene expression over time, with the idea that given a large enough dataset, the relevant cell types will cluster together. The vast number of cells and the implicit heterogeneity of the dataset, within a limited depth of transcript detection and coupled with machine learning techniques which function independently of any hypothesis, produce clusters which can be classified as different cell types. However, the relevance of an ever-increasing number of cell types to addressing important questions in developmental biology remains unclear. In many cases, identifying heterogeneity within a cell population does not explain the function of these cells, the origin of these cells, or how they contribute to the broader organization of a tissue or the body plan of an organism. Hypothesis-free testing and high-throughput data clustering can produce results which fail to address scientific paradigms within a research field and ultimately fade into obscurity due to an overly methodological approach which neither accepts nor rejects a model.

Another challenge faced by hypothesis-free testing is akin to the Type I error: a false positive stemming from the inability to direct research with a specific underlying hypothesis. Large-scale data acquisition will produce statistically significant differences between conditions ipso facto, given enough observations to compare. For example, in recent years the field of experimental psychology has relied heavily on functional magnetic resonance imaging (fMRI), which can collect massive amounts of data from the entire brain with spatial and temporal resolution that is quite impressive from a historical perspective. Furthermore, massive collections from tens of thousands of healthy individuals across various international human brain mapping collaborations produce unfathomable amounts of data. Given the vast scope of such data, it is relatively easy to produce statistically significant results between brain regions or to identify correlative activation in regions which may not be relevant. This has led both to a series of published works which are irreproducible and to conclusions which fail to address the crux questions of human consciousness because they do not originate from a falsifiable hypothesis. Given enough data, conclusions can easily be reached which may not be Type I errors in the strict sense, but which still serve as red herrings for a field and ultimately prolong stasis of scientific understanding.
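As a minimal illustration of this multiple-comparisons problem (a generic simulation rather than any particular fMRI pipeline or dataset; the region and subject counts are arbitrary), comparing two groups drawn from identical noise across many hypothetical regions still yields roughly five percent “significant” differences at the conventional p < 0.05 threshold:

```python
# Illustrative sketch (not a real fMRI analysis): with enough comparisons,
# pure noise yields "statistically significant" differences at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_regions = 10_000      # hypothetical number of brain regions / voxels compared
n_subjects = 20         # hypothetical subjects per group

false_positives = 0
for _ in range(n_regions):
    group_a = rng.normal(0, 1, n_subjects)  # both groups drawn from the same distribution,
    group_b = rng.normal(0, 1, n_subjects)  # i.e. the null hypothesis is true everywhere
    _, p = stats.ttest_ind(group_a, group_b)
    false_positives += p < 0.05

print(f"{false_positives} of {n_regions} comparisons 'significant' by chance alone")
# Expect roughly 5% (~500) false positives without multiple-comparison correction.
```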

Supporters of hypothesis-free testing often tout hypothesis generation as a strength of the approach, while detractors claim that it fails to resolve disagreements between theories. Thomas Kuhn writes, “We often hear that they are found by examining measurements undertaken for their own sake and without theoretical commitment. But history offers no support for so excessively Baconian a method […], laws have often been correctly guessed with the aid of a paradigm years before apparatus could be designed for their experimental determination” (Kuhn 29). However, notable breakthrough hypotheses have been credited to inductive reasoning independent of an established dichotomy of theories. For example, the origin of the structure of the benzene ring has been associated with August Kekulé’s dream, as he describes it: “I was sitting writing on my textbook, but the work did not progress; my thoughts were elsewhere. I turned my chair to the fire and dozed. Again the atoms were gamboling before my eyes. This time the smaller groups kept modestly in the background. My mental eye, rendered more acute by the repeated visions of the kind, could now distinguish larger structures of manifold conformation; long rows sometimes more closely fitted together all twining and twisting in snake-like motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke; and this time also I spent the rest of the night in working out the consequences of the hypothesis” (Olah 2001). While this hypothesis was groundbreaking in organic chemistry, critics have noted that Kekulé may have concocted this dream explanation in order to detract from the contributions of Alexander Butlerov and Archibald Couper, other chemists of the time (Browne 1988; Sorrell 1998). Hypotheses can thus be generated independently of the recent approach of unbiased high-throughput screening, and they can likewise be generated independently of dueling theories within the field of interest.

Another criticism of unbiased or hypothesis-free generated scientific questions is that they fail to address the outstanding questions that predominate within a given field. In this sense, generating a question whose answer cannot be resolved remains as problematic as arriving at an answer to a question that was never posed. For example, the wide application of single-cell transcriptomics has enabled numerous researchers to claim that they have identified unique cell types. In this case, the answer researchers found is that there are more cell types (i.e. more tissue heterogeneity) than we previously thought. However, the implied question would then be: are there more cell types in a given tissue? Perhaps such a question is not particularly relevant to advancing a field. A more relevant scientific question might be: how does a tissue organize and develop over time? One risk that hypothesis-free scientific approaches face is failing to adequately address unresolved questions in scientific research.

The danger of not directly addressing predominant theories in the field is that it can negatively affect the impact of a study by exempting it from overt criticism. In many ways, the hypercompetitive nature of modern scientific research has presented problems for aspiring researchers attempting to obtain funding, but one benefit in highly contested fields may be the abundance of skeptics monetarily incentivized to identify flaws in their competitors’ arguments. In this sense, directly addressing a known area of hotly contested research with a novel hypothesis places the onus on the researcher to triple-check their own assumptions when generating predictions for future research. On the other hand, if a researcher avoids such confrontation through hypothesis-free testing, they may put themselves at a disadvantage by reducing the number of interested, skeptical competitors who might review their work. For example, within the field of inner ear physiology, a hotly contested topic has been how hair cells of the inner ear engage in mechanotransduction. Vigorous debate ensued between two different camps over whether a specific MET channel was responsible for depolarizing hair cells and enabling downstream neurotransmitter release to generate action potentials in afferent neurons projecting to the brain. In this example, claims about the MET channel had to be carefully tempered, and hypothesis testing had to anticipate and address potential claims from detractors in order to be taken seriously. Thus, the controversy and competition generated by detractors from a hypothesis can serve as an important check to balance ongoing research, which is important for scientific progress. Insufficient conflict over a novel hypothesis can lead to half-hearted scientific debates that are no more than two ships passing in the night, which leaves results lacking scrutiny.

Novel hypothesis generation from the so-called unbiased or hypothesis-free testing perspective can also fail to produce interest in a given area of research. Avoiding conflict not only results in diminished scrutiny but can also fail to further a scientific discipline when the work falls too far beyond the scope of a particular area of study. As an illustration, look to the findings of Gregor Mendel in his study of heredity using pea plants. While these observations are now foundational to the field of genetics, their inapplicability to the theories of the time left them largely overlooked. For this reason, hypotheses which fail to address the preeminent dogma of the time, or which fail to directly conflict with competing findings, also often fail to stir the interest necessary for longstanding consideration by the scientific community.

The arguments presented thus far have provided insights into various pitfalls associated with different strategies for hypothesis formation, but what, then, are common criteria for hypothesis formation which are likely to yield success? While there is a high degree of unexpected discovery in science, there are some universal lessons that can be gleaned from the rich history of scientific research to help trainees develop hypotheses that yield skillful results. As Louis Pasteur put it, “fortune favors the prepared mind”. The task of generating an all-encompassing list of how to construct a perfect hypothesis is a daunting one, but, “If the coherence of the research tradition is to be understood in terms of rules, some specification of common ground in the corresponding area is needed” (Kuhn 44). Thus, here I put forth a few criteria for a sound hypothesis which are hopefully independent of a trainee’s methodological constraints and pedigree with respect to established models or theories.

A hypothesis ought to directly address outstanding unanswered questions within the field of research. As previously mentioned, a hypothesis which fails to engage with the prevailing scientific thought in its field is at high risk of becoming obsolete. This obsolescence can stem from perceived inapplicability of the question at hand, insufficient scrutiny from peers, or results which leave researchers indifferent. For example, the hypothesis that two different species of rodents are capable of visually discriminating similar types of objects fails to address larger questions in the field of visual neuroscience. Instead, the question likely needs to focus more specifically on areas currently under active pursuit; mechanisms of synaptic plasticity, or systems-level processing between different visual areas that facilitates encoding, are more favorable areas of pursuit than the previously mentioned example. A hypothesis needs to directly address open-ended questions within the desired field of study.

A hypothesis ought to be testable and falsifiable. While one can imagine an almost infinite number of questions surrounding a phenomenon, the inability to test such ideas makes the process of infinite hypothesis generation overly speculative. Researchers need to strike a balance between inductive empirical logic and results-based pragmatism in order to make predictions about future events which can be tested with some finality. For instance, hypotheses which seek to negate an idea within a field without positive data to support an alternative are often doomed to fail due to the difficulty of “proving a negative”. An example of a bad hypothesis in this vein would be: DNA sequence is not the most important aspect of gene expression. In this example, “importance” is vague and difficult to quantify, and even an additional finding pointing to other phenomena would not necessarily negate DNA’s role in gene expression. A more direct, testable hypothesis would perhaps be: DNA nucleotides can be modified by methylation in order to silence gene expression. In this example, the implication is similar, but there is a way to measure and test, via expression levels and methylation levels, in order to either support the hypothesis or refute it. The strongest hypotheses are often only validated in retrospect through interpretation of results, but upon initial formulation, a good hypothesis should be phrased in such a definitive way that there is no risk of arriving at an unresolved answer.
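To make the contrast concrete, below is a minimal sketch of how the second, testable phrasing could be evaluated; the paired values are purely hypothetical illustrations rather than data from any study. The hypothesis predicts a negative relationship: the more methylated the gene, the lower its expression.

```python
# Hypothetical illustration: does measured methylation of a gene correlate
# negatively with its measured expression, as the hypothesis predicts?
from scipy import stats

# Made-up paired measurements for one hypothetical gene across 8 samples
methylation = [0.10, 0.20, 0.35, 0.40, 0.55, 0.70, 0.80, 0.90]  # fraction of sites methylated
expression  = [9.1,  8.4,  7.9,  6.5,  5.2,  3.8,  2.9,  1.5]   # normalized expression level

r, p = stats.pearsonr(methylation, expression)
if p < 0.05 and r < 0:
    print(f"Supported: methylation and expression are negatively correlated (r={r:.2f}, p={p:.3g})")
else:
    print(f"Not supported on this dataset (r={r:.2f}, p={p:.3g})")
```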

A hypothesis ought to be independent of a specific desired result. Implicit bias toward bolstering support for a given theory or outcome is widespread across many disciplines, but it behooves the emerging scientist to formulate hypotheses which contribute important information regardless of the outcome. For example, an unresolved question in a given field may be: what role does Gene A play in Alzheimer’s disease etiology? A bad hypothesis would be that Gene A causes Alzheimer’s disease, because the threshold for proving a causative effect of Gene A in an already well-studied and genetically complex disease is quite high. Anything short of groundbreaking results which change the field of neurodegenerative research would be a failure. For instance, assume the results show a correlative increase in Alzheimer’s within a subpopulation of patients. In this case, the hypothesis would not be supported as stated and would instead require revision. A better hypothesis would be: Gene A significantly increases the risk of developing Alzheimer’s disease in population A. In the second example, the question of Gene A’s role in Alzheimer’s disease is addressed more specifically, and the burden of proof is lowered to ensure that the researcher does not depend entirely on one all-or-none outcome. Careful drafting of hypotheses can ensure that the scientist is not paving a path to failure by remaining overly expectant of one outcome.

A well-formed hypothesis should anticipate outcomes which generate additional hypotheses. One trait of effective scientific research is that it invariably inspires further novel research without remaining entrenched in overly cyclical iterations of the same questions. While drafting a hypothesis, a researcher may anticipate several outcomes, and, in the best-constructed hypotheses, all of these outcomes should produce additional novel questions of interest. For example, assume that a researcher is examining the mechanism of a drug which reduces symptoms associated with long-term depression. A good hypothesis might anticipate several different intracellular signaling pathways, and for each pathway the researcher’s imagination should ignite with other hypothetical interactions. However, if the possible pathways all produce definitive answers which leave the scientist uninterested in further pursuit, then it is best to restructure or reconsider the course of research. In this sense, although outcomes are often unpredictable, the scientist should not be naïve about the consequences of the outcomes and should anticipate their relative importance as a useful exercise.

A well-formed hypothesis strategically anticipates the consequential impacts of potential results. While “fortune favors the prepared mind”, it also favors the bold, and the more noteworthy scientific advances tend to be ones that involve significant changes in the experimental paradigms of an era, which at times come with some risk to the scientist’s career and potentially their safety. While anticipating the potential experimental outcomes of a hypothesis, scientists also benefit from conceptualizing the potential consequences of such research on a societal level. Notably, the example given earlier of H. pylori as the cause of stomach ulcers was in part demonstrated by consumption of the bacteria by its discoverer, quite a committed gesture for a man who had a clear stake in the results of the research. Likewise, Charles Darwin carefully weighed the consequences of publishing On the Origin of Species for a long time before deciding to make his work public. In contrast, Albert Einstein was much more concerned with understanding mass and energy than with the inevitable application of such discoveries. After witnessing the vast destruction wrought by the atomic bomb, Einstein famously remarked, “If I had known, I should have become a watchmaker”. The relative scientific and societal impact of a hypothesis, whether large or small, should be carefully considered during its formulation.

Works Cited

Adler, M.J. (1992) The Great Ideas: A Lexicon of Western Thought. Macmillan Publishing Company.

Aristotle. De Generatione et Corruptione, translated as On Generation and Corruption by H.H. Joachim in W.D. Ross, ed., The Works of Aristotle, vol. 2 (Oxford, 1930).

Browne, M. (1988) "The Benzene Ring: Dream Analysis." New York Times, August 16, 1988, Section C, Page 10.

Darwin, C. (1859) On the Origin of Species.

Darwin, C. (1871) The Descent of Man.

Friedberg, E.C. (2008) "An Interview with Sydney Brenner." Nature Reviews, January 2008.

Kuhn, T. (1996) The Structure of Scientific Revolutions. The University of Chicago Press.

Olah, G. (2001) A Life of Magic Chemistry: Autobiographical Reflections of a Nobel Prize Winner, p. 54.

Sorrell, T. (1998) Organic Chemistry. University Science Books.

Warren, J.R. and Marshall, B. (1983) "Unidentified Curved Bacilli on Gastric Epithelium in Active Chronic Gastritis." Lancet, 1983 Jun 4; 1(8336): 1273-5.