
He always answered the same thing: "It's about mistakes, but it's not an autobiography!" This drew some laughter and the occasional nod of approval: "What an interesting idea." My goal was simple: to correct the impression that the great advances of science are unbroken success stories. Nothing could be further from the truth. It is not just that the path to triumph is paved with errors; it is that the greater the prize, the greater the blunder can be. Apparently, it is far more difficult to make life or the mind comprehensible to themselves. Although this book deals with some of the most remarkable efforts to understand life and the cosmos, it is concerned more with the journey than with the destination. I have tried to focus on the process of thought, and on the obstacles along the road to discovery, rather than on the achievements themselves. Many people have helped me along the way, some perhaps without knowing it. As always, my wife, Sofie, has been my most patient and understanding ally.

Chapter 1. Mistakes and blunders

Great blunders, like great ropes, are usually woven from a large number of strands. Take the rope thread by thread, take each of its small determinants separately, and you will break them one after another and exclaim: this is worth nothing! People who had never shown the slightest interest in the game now waited expectantly for what was already being called the "duel of the century." Yet on the twenty-ninth move of the first game, in a position that seemed headed for a draw, Fischer chose a move that even an amateur chess player would have instinctively rejected as an error. It could have been a typical case of what is known as "chess blindness," a mistake denoted in the literature of the game by "??".
The most surprising thing was that this mistake came from a man who had crushed his rivals on the way to the confrontation with the Russian Spassky, with an extraordinary run of twenty consecutive victories against the best players in the world. Oscar Wilde wrote that "experience is the name we give to our mistakes." We all undoubtedly make many in our daily lives. We lock the keys inside the car, invest in the wrong stocks, greatly overestimate our ability to multitask, and often blame our misfortunes on entirely the wrong causes. This attribution error is, of course, one of the reasons we so seldom learn from our mistakes. In every case, naturally, we recognize the errors only after committing them; hence Wilde's definition of "experience." Moreover, we are much better at judging others than at analyzing ourselves. Even processes built with the greatest care and attention, such as those of the criminal justice system, fail occasionally, and sometimes in the most heartbreaking way. By "scientific errors" I mean conceptual errors that can endanger grand schemes and entire theories, or that can, at least in principle, delay the progress of science. Human history is replete with examples of monumental blunders in a wide range of disciplines. Some of these errors with significant consequences can be traced back to the Scriptures, or to Greek mythology. But such examples barely scratch the surface. Both commanders were unable to gauge the insurmountable power of "General Winter," the long and harsh Russian winter for which they were so poorly equipped. After all, does not the glory of modern times lie precisely in the establishment of science as a rigorous discipline, and of error-proof mathematics as the "language" of fundamental science? So were the theories of these illustrious minds, and of other comparable thinkers, really free of the most serious errors?
The purpose of this book is to present in detail some of the most surprising errors of scientists of real stature, and to follow the unexpected consequences of those errors. Ultimately, I am confident that I can show that the path to discovery and innovation can be built even along the unlikely road of error. As we shall see, the delicate strands of evolution are interwoven through all of the particular errors I have chosen to explore in depth in this book. Thus I will deal with great errors related to theories of the evolution of life on Earth, the evolution of the Earth itself, and the evolution of our entire universe. This was not the original meaning of the word. In Latin, evolutio referred to the unrolling and reading of a book in the form of a scroll. Even when the word began to gain currency in biology, it was at first used only to describe the growth of an embryo. Even so, the last word of The Origin is "evolved." That I focus my attention on the evolution of life, the Earth, and the universe should not be taken to suggest that these are the only areas in which anyone has blundered. I have chosen these particular topics for two main reasons. The first is that I wanted to review critically the mistakes made by some of the scholars whom almost all of us would place near the top of our list of great minds. The blunders of such luminaries, even those from a past century, are extraordinarily relevant to the questions scientists pose today. As I hope to show, the analysis of these errors forms a body of living knowledge that is captivating in its own right, but that can also guide action in areas as disparate as scientific practice and ethical behavior. The second reason is simple: questions about the evolution of life, the Earth, and the universe have intrigued humans since the dawn of civilization and have inspired tireless inquiries into our origins and our past.
Human intellectual curiosity about these questions lies, at least in part, at the roots of religious beliefs, the mythological accounts of creation, and philosophical inquiry. Moreover, the most empirical, evidence-based aspect of this curiosity is what eventually led to the birth of science. The progress that humanity has made in deciphering some of the complex processes involved in the evolution of life, the Earth, and the cosmos is nothing short of miraculous. Hard as it is to believe, today we think we can reconstruct cosmic evolution back to the moment when our universe was only a fraction of a second old. Even so, many questions remain to be answered, and evolution is still a hot topic even in our day. It took me a long time to decide which of the great scientists to include in this journey through deep intellectual and practical waters, but in the end I settled on the mistakes of five figures. In each case I approach the central theme from two quite different, though complementary, perspectives. I will also briefly examine the various types of errors and try to identify their psychological causes. As we will see, not all errors are alike; in fact, those made by the five scientists on my list are quite different in nature. Darwin's error consisted in not grasping the full implications of a particular hypothesis. Kelvin erred by ignoring unforeseen possibilities. Pauling's blunder was the result of overconfidence born of his previous successes. Hoyle erred in his stubborn defiance of the scientific mainstream. Einstein failed because of a mistaken sense of what constitutes aesthetic simplicity. The main point, in any case, is that along the way we will discover that errors are not only inevitable but an essential part of the progress of science. The development of science is not a direct path to the truth.
Were it not for the false starts and dead ends, scientists might have strayed much further down the wrong roads. They served to dispel the fog through which science advances, with its usual succession of small steps occasionally punctuated by spectacular leaps. I have organized the book so that, for each of the scientists, I first present the essence of some of the theories for which he is best known. These are concise summaries, intended to introduce the ideas of these masters and to provide the appropriate context for the errors, not to serve as full descriptions of their respective theories. In addition, I have decided to focus in each case on only one mistake, rather than cataloguing all the mishaps these savants may have committed over their long careers. On a casual walk on a spring afternoon we are likely to encounter several kinds of birds, many insects, perhaps a squirrel, a few people, and a variety of plants. At one extreme are bacteria, barely a hundred-thousandth of a centimeter long; at the other, blue whales more than thirty meters long. Some birds, such as bar-headed geese and swans, routinely fly above 7,000 meters during their migrations. Not to be outdone, marine organisms reach comparable records of depth. When they finally touched bottom at the record depth of 10,912 meters, they were stunned to discover a new type of abyssal shrimp that did not seem bothered in the least by a pressure of 1,157 atmospheres. He described it as a gelatinous landscape as desolate as the Moon. Yet he also said he saw shrimp-like organisms no more than two or three centimeters long. Nobody knows for sure how many species live on Earth today. A recent catalog, published in September 2009, formally describes and names around 1.9 million species.
But since most species are microorganisms or very small invertebrates, many of them difficult to observe or capture, most estimates of the total number of species are little more than educated guesses. In general, estimates range from 5 million to around 100 million different species, although a figure between 5 and 10 million is considered probable. This great uncertainty cannot surprise anyone who appreciates that a spoonful of the soil we tread on can harbor many thousands of species of bacteria. The second striking characteristic of life on Earth, besides its diversity, is the incredible degree of adaptation shown by plants and animals. Many different biological species live thanks to a portentous "you scratch my back, I'll scratch yours" interaction: a symbiosis. The clownfish, for example, lives among the stinging tentacles of the magnificent sea anemone. The tentacles protect the clownfish from its predators, and the fish returns the favor by shielding the anemone from the fish that feed on it. A special mucus on the clownfish's body protects it from its host's poisonous tentacles, perfecting this harmonious adaptation. Mutualistic relationships have developed even between bacteria and animals. At hydrothermal vents on the seabed, for example, mussels have been found bathed in hydrogen-rich fluids; they could live there only because they housed and exploited an internal population of hydrogen-consuming bacteria. Similarly, a bacterium of the genus Rickettsia was discovered to confer survival advantages on the sweet potato whitefly, and in turn on itself. Incidentally, one popular example of an extraordinary symbiotic relationship is probably no more than a myth. Many texts describe a reciprocal relationship between the Nile crocodile and a small bird known as the Egyptian plover.
According to the Greek philosopher Aristotle, when the crocodile yawns, this little bird "flies into its mouth and cleans its teeth," whereby the plover gets food and the crocodile "gets relief and well-being." Yet there is not a single observation of this symbiosis in the modern scientific literature, nor any photograph or film documenting the behavior. Perhaps this should not surprise us too much, given the questionable record of Pliny the Elder: many of his scientific claims turned out to be wrong! Ideas like these had already made their appearance in the first century of our era. Cicero was also the first to resort to the metaphor of the watchmaker, which would later become the key argument in favor of an "intelligent designer." This was precisely the argument adopted by William Paley almost two thousand years later: an invention implies an inventor in the same way that a design implies a designer. A complicated watch, Paley argued, is testimony to the existence of a watchmaker. Should we not, therefore, conclude the same about something as exquisite as life? Implicit in the argument from design was another dogma: species were believed to be absolutely immutable. The idea of eternal existence was rooted in a long chain of convictions about other entities considered durable and unalterable. In the Aristotelian tradition, for example, the sphere of the fixed stars was supposed to be completely inviolable. Only in Galileo's time was this idea shattered, by the discovery of "new" stars. Similarly, the laws of motion and gravitation formulated by Newton applied to everything from falling apples to the orbits of planets, and seemed decidedly immutable. These were the currents and tides of thought that prevailed about life until one man had the self-confidence, the vision, and the deep understanding to weave, from an enormous ball of disparate threads, a magnificent tapestry.
That man was Charles Darwin, and his great unifying concept has become the most inspiring nonmathematical theory humankind has produced. Darwin literally transformed our ideas about life on Earth from myth into science. Before delving into the central arguments of The Origin, it is important to understand what that book does not discuss. Darwin says not a single word about the actual origin of life, nor about the evolution of the universe as a whole. "Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history." Most of the important intellectual work on evolution, however, had already been done in The Origin. In one fell swoop, Darwin had done away with the concept of design, dispelled the idea that species must be eternal and immutable, and proposed a mechanism that could explain both adaptation and diversity. In simple terms, Darwin's theory consists of four main pillars supported by a single mechanism. The pillars are: evolution, gradualism, common descent, and speciation. Let us look very succinctly at the different components of Darwin's theory. The description will follow Darwin's own ideas rather than a modern, updated version of these concepts, although at certain points it will be practically impossible to avoid presenting observations and evidence accumulated since Darwin's time. As we will discover in the next chapter, however, Darwin made a grave mistake that could have completely invalidated his newest and most important idea: that of natural selection. The first essential aspect of the theory is evolution itself. Although some of Darwin's ideas about evolution had an older history, the French and English naturalists who preceded him failed to come up with a convincing mechanism to explain it. In other words, the species we see today have not always existed.
They are the descendants of earlier species that became extinct. Modern biologists tend to distinguish between microevolution and macroevolution. Microevolution comprises the small changes produced by the evolutionary process over relatively short periods of time, usually within local populations. Macroevolution refers to the results of evolution over longer time scales, generally at or above the species level, and may also include episodes of mass extinction, such as the one that killed the dinosaurs. Darwin borrowed the idea embodied in his second pillar, gradualism, mainly from the work of two geologists. The geological record showed patterns formed by horizontal bands covering large geographic areas. This, together with the discovery of different fossils within these bands, suggested a progression of gradual change. Darwin argued that just as geological action shapes the Earth slowly but surely, evolutionary changes are the result of transformations spanning hundreds of thousands of generations. Contrary to strict uniformitarianism, however, the rate of evolutionary change is usually not uniform over time for a given species, and can vary even more from one species to another. As we shall see later, it is the pressure exerted by natural selection that most strongly determines how quickly evolution manifests itself. Some "living fossils," such as the lamprey, do not seem to have evolved in 360 million years. The next pillar of Darwin's theory, the concept of a common ancestor, is what in its modern incarnation has become the main motivation for all present-day searches for the origin of life. Darwin first argued that all members of any taxonomic class, for example all vertebrates, undoubtedly originated from a common ancestor. But his imagination carried him far beyond this idea. One might wonder, however: if all life on Earth had its origin in a single common ancestor, how did the extraordinary diversity we see emerge?
After all, that was the first characteristic of life that we thought required an explanation. Darwin was not daunted; he took the bull by the horns. It is no accident that the title of his work contains the word "species." Darwin's solution to the problem of diversity involved another original idea: branching, or speciation. Darwin reasoned that life must have started from a common ancestor in the same way that a tree grows from a single trunk. Just as branches spring from the trunk and then divide into smaller ones, the "tree of life" evolved through many branching events, creating different species at each bifurcation node. Many of these species became extinct, just as a tree's branches break and die. But since each branching doubles the number of descendant species of an ancestor, the number of distinct species can grow enormously. When does speciation occur? According to current theories, chiefly when a group of members of a given species becomes geographically separated. For example, a group may move to the rainy slope of a mountain range while the rest of the species remains on the dry slope. On rare occasions, speciation can create new species arising from a cross between two species. Remarkably, in 2011 a group of scientists confirmed a conjecture of Nabokov's with the help of sequencing technology. Darwin was sufficiently aware of the importance of the concept of speciation for his theory to include a schematic diagram of his tree of life; in fact, it is the only figure in the entire book. At a more detailed level, a combination of molecular and paleontological data has recently allowed us to draw, for example, a phylogenetic tree with relatively fine resolution and dating for all living and extinct mammal families.
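The arithmetic behind the branching picture is easy to check. The following toy calculation (my own illustration, not Darwin's) shows how repeated doubling at bifurcation nodes generates enormous diversity from a single ancestor, even when a crude extinction factor is thrown in.

```python
def species_after(branching_events, survival_fraction=1.0):
    """Species count after n branching events, assuming every surviving
    lineage splits in two at each event; survival_fraction is a crude
    stand-in for extinction (1.0 means no lineage ever dies)."""
    count = 1.0
    for _ in range(branching_events):
        count = count * 2 * survival_fraction
    return count

print(species_after(23))       # 23 doublings already exceed 8 million
print(species_after(30, 0.6))  # extinction slows growth but does not stop it
```

With no extinction the count is simply 2^n, so a few dozen branching events suffice to reach the millions of species cataloged today; this is the point of the tree metaphor, not a realistic model of phylogeny.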
In the end, I came to the conclusion that two absolutely essential ingredients were simplicity and something known as the Copernican principle. By "simplicity" I mean reductionism, in the sense in which most physicists understand it: the ability to explain as many phenomena as possible with as few laws as possible. This has always been, and still is, the goal of modern physics. Copernicus taught us that the Earth is not the center of the solar system, and all the later findings of astronomy have only reinforced our conviction that, from a physical perspective, humans play no special role in the cosmos. We live on a tiny planet revolving around an ordinary star in a galaxy containing hundreds of billions of similar stars. Our physical insignificance goes even further. In short, there is nothing special about us. Both reductionism and the Copernican principle are genuine hallmarks of Darwin's theory of evolution. With one unified conception, Darwin explained almost everything concerning life on Earth; it could hardly be more reductionist. But his theory was also Copernican to the core. Humans evolved in the same way as any other organism. In the tree analogy, all the young buds are separated from the main stem by a similar number of bifurcation nodes; they differ only in pointing in different directions. Humans decidedly do not occupy an exceptional or unique place in this scheme: they are not the lords of creation, but the product of adaptation and development from their ancestors on Earth. This was the end of "absolute anthropocentrism." All living beings on Earth are part of one big family. To a large extent, what has fueled the opposition to Darwin for more than 150 years has been precisely the fear that the theory of evolution would expel humans from the pedestal on which they had placed themselves. Darwin forced us to rethink the nature of the world and of humankind. The British geneticist J. is usually quoted in this regard. Today we know that even in genome size, humans, believe it or not, fall short of a freshwater amoeboid called Polychaos dubium. Darwin's theory, therefore, amply satisfies the two criteria for a theory to be truly beautiful. It is not surprising, then, that The Origin instigated what is perhaps the most drastic change in thought ever caused by a scientific treatise. Returning to the theory itself: Darwin was not satisfied with making claims about evolutionary change and the production of diversity. He believed his main task was to explain how those processes had taken place. To reach his goal, he had to devise a compelling alternative to creationism, one that would explain the apparent design of nature. This was puzzling even for the evolution-inclined naturalists who preceded Darwin: if species are so well adapted, how can they evolve and still remain well adapted? Darwin was keenly aware of this problem, and he took care that his principle of natural selection explain it satisfactorily. The basic idea underlying natural selection is quite simple. Nonetheless, Wallace was very clear about who he thought deserved most of the credit. In a letter to Darwin of May 29, 1864, he wrote: "As to the theory of natural selection itself, I shall always maintain it to be actually yours and yours only." Let us try to follow Darwin's line of thought. First, he realized that species tend to produce more offspring than can survive. Second, the individuals of a given species are never quite identical. "This principle, by which each slight variation, if useful, is preserved, I have called Natural Selection." In other words, over many generations the beneficial variations prevail while the harmful ones are eliminated, resulting in evolution toward better adaptation.
For example, it is easy to see that being faster can benefit both predator and prey. Several elements combine to form the complete concept of natural selection. The first, as noted, is that individuals vary. Second, populations tend to have such a high reproductive potential that, if not subject to some limitation, they would grow exponentially. A female ocean sunfish, for instance, produces up to three hundred million eggs at a time. If only 1 percent of those eggs were fertilized and survived to adulthood, the oceans would soon be packed with sunfish. Metaphorically, we can picture the selection process as sifting through a huge sieve. The larger particles stay in the sieve, and those that pass through are eliminated. The environment is the agent that shakes the sieve. Today's biologists, however, hardly use the expression "survival of the fittest," because it can give the erroneous impression that only strong and healthy individuals survive. In fact, "survival of the fittest" meant to Darwin exactly the same as "natural selection": the organisms with heritable characteristics favored by selection are the ones that transmit them most successfully to their offspring. A third, extremely important aspect of natural selection should be noted: it really consists of two steps that occur in sequence, the first of which mainly involves chance, while the second is decidedly non-random. In the first step, a heritable variation arises. In the modern language of biology, we understand this as genetic variation introduced by random mutations, by the shuffling of genes, and by all the processes associated with sexual reproduction and the generation of a fertilized egg. Contrary to what some mistaken interpretations of natural selection suggest, chance plays a much smaller role in the second step. The selection process is not entirely deterministic, however: no genes are good enough to save a dinosaur species exterminated by a giant meteor, to give one example.
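The sunfish figure can be made concrete with a toy calculation of my own (not from the book): assume each female lays a full clutch of three hundred million eggs exactly once, 1 percent of eggs reach adulthood, and half the survivors are female. Integer arithmetic keeps the numbers exact.

```python
def sunfish_population(generations, eggs_per_female=300_000_000,
                       percent_surviving=1):
    """Survivors per generation under toy assumptions: every surviving
    female (half of all survivors) lays one full clutch, and a fixed
    percentage of the eggs reach adulthood."""
    females, history = 1, []
    for _ in range(generations):
        survivors = females * eggs_per_female * percent_surviving // 100
        history.append(survivors)
        females = survivors // 2
    return history

for gen, count in enumerate(sunfish_population(3), start=1):
    print(f"generation {gen}: {count:,} sunfish")
```

Starting from a single female, the third generation already contains billions of billions of fish, which is the point of the argument: some limitation must intervene long before then.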
Put simply, evolution is the change in gene frequencies over time. Two main features distinguish natural selection from the concept of "design." The first is that natural selection lacks an ultimate goal, a "long-term strategic plan"; that is not what one would expect from a master designer. The second is that, because natural selection is limited to operating on what already exists, the range of its possibilities is restricted. Natural selection starts by modifying species that have already evolved to a certain state; it cannot under any circumstances redesign them from scratch. It is like asking a tailor to alter an old dress rather than asking the house of Versace to design a new one. Consequently, natural selection leaves much to be desired as a designer. Thus, although certain characteristics would confer an advantage in biological fitness, as long as no heritable variation arises that achieves that result, natural selection has no way of producing those characteristics. Imperfections are, in fact, the unmistakable mark of natural selection. It is enough to point out the following fact: the fossil record unmistakably reveals an evolution from simple life to complex life. Specifically, over billions of years of geological time, the older the geological layer in which a fossil is discovered, the simpler the species. I have already mentioned one of the clues pointing to the reality of natural selection: the drug resistance developed by various pathogens. In this case the whole process of evolution has been drastically compressed in time, because bacterial generations are very short and bacterial populations enormous. It is difficult to find a better demonstration of natural selection in action. Another fascinating, though controversial, example of natural selection is the evolution of the peppered moth.
Before the industrial revolution, the light colors of this nocturnal moth gave it good camouflage against the background of its habitat: lichen-covered trees. The industrial revolution in England brought with it very high levels of pollution, which killed many lichens and darkened many trees with soot. As a result, light-bodied moths were suddenly exposed to intense predation, which nearly drove them to extinction. At the same time, the dark, melanic form of the moth began to prosper around 1848, thanks to its improved camouflage. As if to demonstrate the importance of "green" policies, light-colored moths began to reappear as soon as environmental controls were adopted. Another common, more philosophical objection to natural selection is that Darwin's definition is circular, or tautological. Put simply, this adverse judgment runs as follows: natural selection means "survival of the fittest." But how are "the fittest" defined? As those who survive best; therefore the definition is a tautology. This argument stems from a misunderstanding, and it is absolutely wrong. Darwin did not use "the fittest" to refer to those who survive, but to those who, compared with other members of the species, had a better expectation of survival because they were better adapted to their environment. The interaction between a variable characteristic of an organism and the environment in which it lives is crucial to this definition. As organisms compete for limited resources, some survive and others do not. Surprisingly, even the philosopher of science Karl Popper raised the suspicion of tautology against evolution by natural selection. Popper essentially questioned the explanatory power of natural selection with the following argument: if a certain species exists, then it was adapted to its environment.
In other words, what Popper was saying was that adaptation is defined simply as the quality that guarantees existence, so that nothing is ruled out. Since Popper published this argument, however, various philosophers have shown that it is incorrect. In fact, Darwin's theory of evolution rules out more possibilities than it allows. For example, according to Darwin, no new species can appear that lacks an ancestral species. Similarly, Darwin's theory rules out all variations that cannot be reached by gradual steps. In modern terminology, "reached" refers to processes governed by the laws of genetics and molecular biology. A crucial issue is the statistical nature of adaptation: predictions cannot be made about individuals, only about probabilities. Two identical twins need not produce the same number of descendants, or even both survive. Finally, to leave no loose ends, I should mention that although natural selection is the main engine of evolution, other processes can also cause evolutionary change. One example is what modern evolutionary biologists call genetic drift: a change, caused by chance or by a sampling effect, in the relative frequency with which a variant of a gene appears in a population. This effect can be significant in small populations, as the following example demonstrates. When a coin is tossed, we expect it to come up heads about half the time. That means that if you toss a coin a million times, the number of heads will be close to half a million. If a coin is tossed only four times, however, there is a non-negligible probability that it will come up heads every time, deviating substantially from the expectation. Now imagine a large population of organisms on an island in which a certain gene appears in just two variants, X and Z. The alleles occur in the population with equal frequency; that is, the frequency of each of X and Z is 1/2.
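The coin analogy is easy to quantify. As a brief check (my own arithmetic, not from the text), the chance that a fair coin comes up heads on every toss shrinks rapidly with the number of tosses, which is why sampling flukes matter in small populations but wash out in large ones.

```python
from fractions import Fraction

def prob_all_heads(tosses):
    """Probability that a fair coin comes up heads on every one of
    `tosses` independent tosses: (1/2)^tosses, kept as an exact fraction."""
    return Fraction(1, 2) ** tosses

print(prob_all_heads(4))            # prints 1/16 -- far from negligible
print(float(prob_all_heads(100)))   # vanishingly small for large samples
```

Four tosses give all heads once in every sixteen trials, so a four-survivor population can easily end up with a badly skewed allele sample; a million-toss "population" essentially never does.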
But before these organisms have a chance to reproduce, a huge tsunami devastates the island and kills all but four individuals. Genetic drift can cause relatively rapid evolution in the gene pool of a small population, entirely independently of natural selection. An often-cited example of genetic drift concerns the Amish community of eastern Pennsylvania. Among the Amish, polydactyly is much more common than in the general population of the United States. It is one of the manifestations of the rare Ellis-van Creveld syndrome. The reason for the abnormally high frequency of these alleles among the Amish is that the members of this community tend to marry one another, and the population itself originated in a group of about two hundred German immigrants. Three aspects of genetic drift deserve emphasis. The first is that the evolutionary changes due to genetic drift result entirely from chance and sampling effects; that is, they are not driven by selection pressure. Second, genetic drift does not produce adaptation, which remains the result of natural selection alone. Finally, while genetic drift clearly occurs to some extent in all populations, its effects are most pronounced in small, isolated populations. These are, very succinctly, some of the fundamental characteristics of the theory of evolution by natural selection as Darwin set it out. This biologist revolutionized thinking in two main ways. But how wrong we could be to suppose such a thing! It is hardly surprising that the theory of evolution constituted one of the most drastic revolutions in the history of science. The question, then, is: where did Darwin go wrong? To say that the laws of inheritance were "quite unknown" was probably, of the whole book, the statement that most notoriously understated reality.
Darwin had been educated in the then widespread belief that the characteristics of the two parents are physically blended in their descendants, as in a mixture of paints. According to this "paint-pot theory", the hereditary contribution of each ancestor would be halved in each generation, and the descendants of any sexual pair were expected to present intermediate characteristics. As in a gin and tonic: if you keep topping up the drink with tonic, eventually you cannot taste the gin. Although Darwin evidently understood this inevitable dilution, somehow he still expected natural selection to work: "Some of its children would probably inherit the same habits or structure, and by the repetition of this process, a new variety might be formed." But Darwin failed to see the simple fact that this expectation was absolutely untenable under a theory of inheritance by blending. The first to draw attention to this inconsistency was the Scottish engineer Fleeming Jenkin. Jenkin was a man of many talents, with occupations as varied as painting portraits of passers-by and designing submarine telegraph cables. His criticism of Darwin was quite simple. Darwin cannot be blamed for not knowing better than the scientifically accepted inheritance theory of his time. Consequently, I do not consider his adoption of the idea of inheritance by blending to be an error in itself. Where Darwin erred was in completely overlooking the fact that, under the assumption of blending inheritance, his mechanism of natural selection simply could not function as he believed. Let us examine in greater detail this grave error and its potentially devastating consequences. Although the essay attacked the theory of evolution on several fronts, I will focus on the argument that laid bare Darwin's error. He then went on to consider the case of an individual with a rare mutation that conferred the advantage of twice the chance of surviving and reproducing enjoyed by any other individual.
To illustrate this dilution effect, Jenkin chose an example strikingly charged with prejudice: that of a white man with supposedly superior traits shipwrecked on an island inhabited by black men. Davis showed that when the correction was made to keep the population at an approximately constant size, the effect of the "sport" (the singular variation) did not fade away but was distributed, although diluted, throughout the population. Generation after generation the population would become lighter, but the shade of gray would never disappear entirely. Before examining in depth the question of how Darwin could have overlooked this seemingly fatal flaw in his theory of natural selection, it will be useful to understand the theory of inheritance by blending from the perspective of modern genetics. Each individual possesses two copies of each of its genes, and these two copies may be identical or may vary slightly. The different forms of a gene that can be present at a given position on a chromosome are the variants we know as alleles. To his surprise, the first generation of descendants produced only yellow seeds. In the next generation, however, a 3:1 ratio appeared between yellow seeds and green seeds. From these puzzling results, Mendel managed to distill a theory of atomistic, or particulate, inheritance. In stark contrast to the blending theory, Mendel's theory held that genes were discrete entities that were not only preserved during development but also transmitted completely intact to the next generation. These deductions, like Mendel's experiments themselves, were nothing short of brilliant. Nobody had reached similar conclusions in almost ten thousand years of agriculture. Mendel's results shattered the idea of blending at a stroke, since already in the first generation of offspring none of the seeds was an average of the two parents.
A simple example will help clarify the fundamental differences between Mendelian inheritance and blending inheritance in terms of their effects on natural selection. Although blending inheritance obviously predated the concept of the gene, we can use this language and still preserve the essence of the blending process. Figure 4. If neither of the two genes dominated the other, then under both blending and Mendelian inheritance the children of one of these pairs would be gray, since they would carry the gene combination Aa. Now comes the fundamental difference. In the blending theory, A and a would physically merge, creating a new type of gene that confers the gray color on its carriers. We call this new gene A. This kind of merging would not occur in Mendelian inheritance, where each gene retains its identity. Under blending inheritance, on the other hand, variation is inevitably lost, since all extreme types quickly vanish into a sort of average. As Jenkin correctly pointed out, and as the following example clearly demonstrates, this feature of blending inheritance was catastrophic for Darwin's ideas about natural selection. Suppose we start with a population of ten individuals. Nine have the gene combination aa and one has the combination Aa, which makes it gray. Suppose further that being black is advantageous for survival and reproduction, and that even having a slightly darker color is better than being completely white. Figure 5 attempts to follow schematically the evolution of a population with these characteristics if inheritance proceeds by blending. If black confers an advantage in that environment, then given enough time natural selection could lead to the entire population being black. The conclusion is simple: if Darwin's theory of evolution really works through natural selection, it needs Mendelian inheritance. But at a time when this genetic knowledge had not yet been discovered, how did Darwin respond to Jenkin's criticism?
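The contrast can also be sketched numerically. The toy model below is my own illustration, not Darwin's or Jenkin's mathematics: under blending, the sport's hereditary "dose" is halved at every cross with the common type, so the variation fades geometrically; under Mendelian inheritance, a discrete allele with a selective advantage s (modeled with a standard one-locus textbook recursion) keeps its identity and can spread through the population.

```python
# Blending: the sport's dark "gene dose" is halved every time its
# descendants cross with the abundant common type.
def blending_dose(generations: int) -> float:
    dose = 1.0
    for _ in range(generations):
        dose /= 2
    return dose

# Mendelian: a discrete allele A with selective advantage s is transmitted
# intact; its frequency p follows the standard genic-selection recursion
# p' = p(1+s) / (1 + s*p), where 1 + s*p is the mean fitness.
def mendelian_frequency(p0: float, s: float, generations: int) -> float:
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)
    return p

dose_after_10 = blending_dose(10)                     # 1/1024: the variation has all but vanished
freq_after_200 = mendelian_frequency(0.05, 0.1, 200)  # the allele spreads toward fixation
```

Under blending, ten generations reduce the sport's contribution to less than a thousandth; under the Mendelian recursion, the same advantageous variant, starting at 5 % frequency with a 10 % advantage, takes over essentially the whole population.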
What kills some cures others. Darwin was a genius in many ways, but he was decidedly not a good mathematician. "The work was repugnant to me, chiefly because I was not able to see any meaning in the first steps of algebra. I do not think I would ever have got beyond the lowest levels." This being the case, the arguments of The Origin tend to be more qualitative than quantitative, especially when it comes to the production of evolutionary change. In the few places in The Origin where Darwin attempts a calculation, on more than one occasion he gets it wrong. Even so, it is almost unthinkable that Darwin was entirely unaware of the potential dilution effect of blending inheritance until the moment he read Jenkin's article. To a certain extent, Darwin even counted on the effect of dilution to guarantee the integrity of populations against the tendency of individuals to deviate from their type through variations. So how could he fail to understand how difficult it would be for a "sport" to overcome the equalizing force of blending? The latter may have been partly a consequence of his general theory of reproduction and development, in which he assumed that only pressures during development triggered variations. Darwin's bewilderment over inheritance ran much deeper, as the following inconsistency shows. This idea of a latent "tendency" was manifestly removed from ordinary blending inheritance, and was in several ways closer in essence to Mendelian inheritance. However, it does not seem that, at least at the beginning, Darwin thought of appealing to this idea of latency in his attempts to respond to Jenkin. Instead, Darwin decided to shift the emphasis from the role he had previously assigned to singular variations and place it instead on individual differences, as the source of "raw material" on which natural selection would act.
In other words, Darwin now relied on a whole continuum of variations to produce evolution by natural selection over many generations. "Fleeming Jenkin's arguments have convinced me." He also added some new paragraphs to the fifth edition, two of which, in particular, are of enormous interest. In the other paragraph, Darwin presented his own concise summary of Jenkin's dilution argument. This paragraph is fascinating because it contains two seemingly small but very significant differences with respect to Jenkin's original text. In the first place, Darwin assumes that a pair of animals has two hundred offspring, of which two survive to reproduce. In the second place, and this is the most curious point, Darwin supposes in his summary that only half of the offspring of the "sport" inherit the favorable variation. But this prediction is contrary to the predictions of the blending theory! Regrettably, Darwin did not yet know how to develop the consequences of a theory of non-blending inheritance, and he accepted Jenkin's conclusions without further discussion. There are, however, quite a few indications that Darwin was never comfortable with the long-standing theory of blending inheritance. "I cannot otherwise understand how the forms resulting from crosses revert so often to ancestral forms. But all this, of course, is infinitely crude." Crude or not, his observation was extraordinarily insightful. Darwin recognized here that the combination of maternal and paternal hereditary material was more like the result of shuffling two decks of cards than of mixing two paints. Although the ideas Darwin expressed in this letter can certainly be considered an impressive precursor of Mendelian genetics, his frustration with blending inheritance eventually led him to develop a completely erroneous theory known as pangenesis. In Darwin's pangenesis, the whole body was supposed to send instructions to the reproductive cells.
Therefore, strictly speaking, it was not the reproductive elements that generated new organisms, but the cells of the body itself. Unfortunately, pangenesis had inheritance running in precisely the opposite direction to the one that modern genetics would soon establish: it is the fertilized egg that gives the instructions for the development of the whole body, and not the other way around. Confounded, Darwin clung to his mistaken theory with a conviction similar to the one he had previously shown in clinging to his correct theory of natural selection. This is a perfect example of a brilliant idea that failed miserably because it was associated with the wrong mechanism for its implementation: pangenesis. Nowhere did Darwin articulate his atomistic, essentially Mendelian ideas more clearly than in an exchange with Wallace in 1866. This letter is remarkable in two respects. In the first place, Darwin himself came remarkably close to discovering the Mendelian ratio of 3:1. After crossing the snapdragon with its peloric form, the first generation of offspring consisted only of the common type, while in the second there appeared eighty-eight common plants and thirty-seven peloric ones. In the second place, Darwin points out the obvious fact that the simple observation that all descendants are male or female, and not some intermediate hermaphrodite, is in and of itself an argument against the "paint-pot" blending! So Darwin had the proof of the correct form of inheritance before his eyes. As he had already pointed out in The Origin: "The slight degree of variability in hybrids from the first cross or in the first generation, in contrast with their extreme variability in the succeeding generations, is a curious fact and deserves attention." In any case, although Darwin came tantalizingly close to Mendel's discovery, he failed to grasp its enormous generality and failed to recognize its vital importance for natural selection.
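Darwin's snapdragon counts can in fact be checked against the 3:1 expectation with a tool he did not have: Pearson's chi-square goodness-of-fit test, introduced decades later. The short calculation below is my own illustration of just how close 88:37 is to the Mendelian ratio.

```python
# Darwin's snapdragon counts: 88 common to 37 peloric plants in the
# second generation, compared with the Mendelian 3:1 expectation.
common, peloric = 88, 37
total = common + peloric            # 125 plants

expected_common = total * 3 / 4     # 93.75
expected_peloric = total * 1 / 4    # 31.25

# Pearson's chi-square statistic for goodness of fit (1 degree of freedom):
chi2 = ((common - expected_common) ** 2 / expected_common
        + (peloric - expected_peloric) ** 2 / expected_peloric)
# chi2 ≈ 1.41, well below 3.84, the rejection threshold at the 5 % level,
# so the observed counts are fully consistent with a 3:1 ratio.
```

In other words, by modern statistical standards Darwin's own data already fit Mendel's ratio comfortably; what was missing was the conceptual framework, not the numbers.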
To complete our understanding of Darwin's attitude toward particulate inheritance, a few nagging questions must be answered. Obviously, if this last claim turned out to be true, it would mean that Darwin was fully aware of Mendel's work. Apparently, Mendel's name does not appear even once in Darwin's entire list of books and articles. However, two of the books Darwin owned did refer to Mendel's work. Figure 7 shows the title page, on which Darwin wrote his name. As I have seen with my own eyes, this book had an even less distinguished fate: the precise pages on which Mendel's work is described remain uncut in Darwin's copy! Figure 8 shows an image of Darwin's copy, made at my request, in which the uncut pages can be seen. Sclater left no doubt in his answer: absolutely not. Mendel owned the second German edition of The Origin, which was published in 1863. In his copy, he highlighted certain passages with lines in the margin and underlined parts of the text in others. Mendel's markings show great interest in issues such as the sudden appearance of new varieties, artificial and natural selection, and the differences between species. As is well known, this is not the case, for precisely among these plants are found not only the most varied but also the most variable forms. It is precisely on this point that blending inheritance failed, as Jenkin had pointed out. In other words, variation was inherited, not blended away. In addition, Mendel tried several times to create variations in plants by removing them from their natural habitat and bringing them to his garden in the monastery. Mendel accepted, therefore, at least some parts of the theory of evolution. To answer this question we have to understand the special historical circumstances that surrounded Mendel.
In this oppressive atmosphere, Mendel, who had been ordained a priest in 1847 and elected abbot of the monastery in 1868, probably did not think it prudent to express any explicit support for Darwin's ideas. We may still wonder what would have happened if Darwin had read Mendel's article before November 21, 1866, when he finished writing his chapter on the misguided theory of pangenesis. Of course, we will never know, but I believe it would not have changed anything. Darwin was neither prepared to think in terms of a variation that affected only one part of an organism and not the others, nor sufficiently skilled in mathematics to understand and properly appreciate Mendel's probabilistic approach. Developing a specific and universal mechanism from a few isolated cases of a 3:1 ratio in the transmission of certain properties of a given plant was not Darwin's strong suit. In addition, Darwin's stubborn defense of his theory of pangenesis suggests that at that point in his life he may have been affected by what modern psychologists call the illusion of confidence, a common state in which people overestimate their abilities. Although in principle it applies to people who lack a skill but are not aware of it, at some level it can affect anyone. There are studies showing, for example, that most chess players believe they can play much better than their official ranking indicates. If Darwin really did suffer from an illusion of confidence, it would be quite ironic, for he himself had once had the insight to observe that "ignorance more frequently begets confidence than does knowledge." It took seventy years to resolve the complexities involved in developing a quantitative approach to the phenomena of variation and survival rates, and to fully integrate Darwinian selection with Mendelian genetics.
Geneticists then argued that mutations, which were the only acceptable form of heritable variation, were abrupt and produced the final change directly, rather than being gradually selected. This opposition began to dissipate toward the 1920s as a result of several lines of research that opened new paths. All these studies, and others like them, showed that mutations occurred infrequently and that in most cases they were disadvantageous. On the rare occasions when advantageous mutations appeared, natural selection was identified as the only mechanism that could allow them to spread through the population. In addition, biologists gradually came to realize that the independent action of a large number of genes can result in the continuous variation of a characteristic. Darwin's gradualism, by which natural selection produced adaptation by acting on minute differences, eventually prevailed. This was the work that provided the definitive proof that Mendelian genetics and Darwinian selection were complementary and mutually indispensable. Bearing in mind that Darwin was wrong on the fundamental question of genetics, it is absolutely prodigious how much he got right. Over time, all these intertwined threads converged on a single conclusion: to understand life it is necessary to understand some tremendously intricate chemical processes involving some very complex molecules. I mentioned earlier that Jenkin's article raised some other objections to Darwin's theory of evolution. The controversy that followed will allow us some fascinating insights not only into the differences between the methodologies used in different branches of science, but also into how the human mind works. Chapter 4 How old is the Earth? The concept of a universal and linear time did not arise immediately.
In the Western tradition, Plato and Aristotle were more concerned with the why and how of the order of nature than with the when, but even they toyed with the idea of recurrent cycles, in tune with the celestial movements. In this religious context, determining the age of the Earth had for centuries been the prerogative of theologians. His motivation for calculating the age, he said, was not "to provide mere matter for much discussion" but "to shed light on the number of years since the foundation of the world." Although Theophilus admitted a certain margin of error in his calculations, he did not think it amounted to more than 200 years. Many of the chronologists who followed tended simply to add up the time intervals indicated between biblical events, the ages at death of certain individuals according to the Scriptures, or the durations of generations. Ussher's calculations were somewhat more sophisticated, since he supplemented the biblical accounts with some astronomical and historical data. This particular date became well known in the Anglo-Saxon world because it was added as a marginal note to the English Bible of 1701. Naturally, the Christian vision of time closely followed the Jewish tradition, which in turn was based above all on a literal reading of the story of the book of Genesis. In the context of a divine drama in which the Jewish people supposedly played the leading role, having a History was clearly crucial. According to this tradition, the world would have been created about 5773 years ago. In fact, Maimonides had not even been the first to suggest that the passages of Genesis had a purely allegorical intent. But the Sun is part of the celestial vault, so time must be recognized as something posterior to the cosmos. The great German philosopher Immanuel Kant was one of the first to weigh critically the balance between biblical interpretation and the laws of physical science. Kant came down decisively on the side of physics.
In 1754 he drew attention to the danger of relying on the span of a human life to estimate the age of the Earth. This is what he wrote: "Man makes the greatest mistake when he tries to use the sequence of human generations that have elapsed in a given time as a measure of the age of the greatness of God's work." On the contrary, he inferred a long history of gradual geological processes. The work was written as a series of fictitious conversations between an Indian philosopher and a French missionary. In modern terms, it was a theory of what we today call sedimentation. Strictly speaking, De Maillet's calculations, as well as the theory on which they were based, were erroneous for several reasons. Secondly, his knowledge of rock formation was quite deficient, and he further weakened his argument with occasional forays into fantasy. For example, to support his claim that all forms of life came from the sea, De Maillet relied on tales of mermaids and men with tails. For the first time, the age of the Earth was being measured not against a human life but against the rhythm of natural processes. Today we can see that De Maillet's work was more than an "extravagance": it contained the seeds of geochronology. Determining the age of the Earth with scientific methods was about to become a worthy challenge for science. Buffon was a truly prolific character who was not only an accomplished scientist but also a successful businessman. He is perhaps known above all for the clarity and persuasiveness with which he presented a new method for studying nature. Buffon's goal was to deal systematically with topics ranging from the solar system, the Earth and the human race to the different kingdoms of living beings. Then, in the purest style of an experimenter, he was not satisfied with a purely theoretical scenario and proceeded immediately to make spheres of different diameters and to measure accurately the time they took to cool.
From these experiments, he estimated that the terrestrial globe had solidified in 2905 years and had taken 74,832 years to cool to its present temperature, although he suspected that the cooling time had to be much longer. In the end, however, it was not pure Newtonian physics that drew attention to the problem of the Earth's age. In view of the increasing difficulty of trying to fit the entire history of the Earth into the few thousand biblical years, some of the most religious naturalists chose to resort to catastrophes, such as floods, as agents of rapid change. If vast periods of time were to be denied, catastrophes presented themselves as the only vehicle that could shape the surface of the Earth appreciably and almost instantaneously. Richard Kirwan, one of the best-known chemists of the time, articulated this position very clearly. Kirwan confronted Hutton directly with Moses, describing how concerned he was to observe "how fatal the suspicion of the great antiquity of the globe has been to the credit of Mosaic history, and consequently to religion and morality." Lyell argued that the forces that sculpted the Earth had remained essentially unchanged throughout the Earth's history, both in their strength and in their nature. This was the idea of uniformitarianism that inspired Darwin's concept of gradualism in the evolution of species. The basic premise was simple: if there was one thing all these slow-acting geological forces required in order to have an appreciable effect, it was time. Lyell's followers all but abandoned the idea of a definite age for the Earth in favor of a vague notion of an "inconceivably long" stretch of time. This principle contrasted sharply with the theological estimates of about six thousand years. Darwin imagined evolution as a long sequence of phases, each lasting perhaps ten million years. There was, however, an important difference between Darwin's position and that of the geologists. And a controversy was beginning to take shape.
For a simple gentleman, he invented Sir William Thomson's mariner's compass, as well as a navigational sounding machine, which sadly is less well known. He also made great applications of electricity at sea: as an engineer on several Atlantic cables, as inventor of the mirror galvanometer and the siphon recorder, and of much more that is not only scientific but also useful. A great scientist, honest and humble: great is what he has written, and greater still what he has done. This is a fairly accurate, if at times amusing, description of the many achievements of a man whom one of his biographers dubbed the "dynamic Victorian." There is no doubt that Kelvin was the most prominent figure of the era that witnessed the end of classical physics and the birth of the modern age. Figure 9 shows a portrait of Lord Kelvin, possibly made from a photograph taken in 1876. What the panegyric did not reflect, however, was how far Kelvin's prestige in scientific circles had collapsed in his later years. In his old age, Kelvin gained a reputation as an obstructor of modern physics. He is often portrayed as a person clinging obstinately to outdated ideas, who resisted the latest findings about atoms and radioactivity. Despite being a person of extensive technological knowledge, Kelvin made equally surprising statements about technology, for example: "I have no faith in aerial navigation other than ballooning." It was this enigmatic man, a brilliant scientist in his youth who had lost touch with science in his old age, who set out to discredit the geologists' ideas about the age of the Earth. This article followed the thread of another recent article, published only one month earlier under the title "On the age of the Sun's heat". Thomson made it clear from the first sentence that this was not going to be an easily forgotten technical essay.
Although "restless" is a slight exaggeration, it is true that Kelvin's first articles on the question of the conduction and distribution of heat within the body of the Earth had been written much earlier, in 1844 and 1846, respectively. Even before he was seventeen, Thomson had managed to detect an error in an article on heat written by an Edinburgh professor. Consequently, Kelvin argued, unless it could be demonstrated that there were internal or external sources of energy to compensate for the heat losses, it was clear that no steady state or repetition of identical geological cycles was possible. Put plainly, Lyell imagined that chemical reactions generated heat, which drove electrical currents that in turn dissociated the chemical compounds into their original constituents, so that the whole process could begin again. Kelvin could hardly hide his disdain. He demonstrated very clearly that such a process amounted to a kind of perpetual motion machine, violating the principle of the dissipation of energy, by which mechanical energy is irreversibly transformed into heat, as in the case of friction. Lyell's mechanism violated the basic laws of thermodynamics. At its most basic, Kelvin's calculation of the Earth's age was simple. Since the Earth was cooling, the science of thermodynamics could be used to calculate the finite geological age of our planet: the time it would have taken to reach its current state since the formation of the solid crust. Kelvin understood the potential of the theory, and in 1849 he devoted himself to making a series of measurements of underground temperatures; in 1855 he vehemently proposed that a full geothermal survey be undertaken, precisely so that the age of the Earth could be calculated. Kelvin thought he had good estimates of two of these quantities.
The measurements made by various geologists had shown that, although the results varied from place to place, on average the temperature increased by 3 °C for every 100 meters of depth. For the thermal conductivity, Kelvin relied on his own measurements of two types of rock and of sand, obtaining what he considered an acceptable average. The third physical quantity, the temperature of the Earth's deep interior, was extremely problematic, since it could not be measured directly. But Kelvin was not a man to be easily defeated by difficulties. He put his analytical mind to work and finally managed to deduce an estimate of the unknown internal temperature. The intellectual contortions he had to perform to reach this result showed Kelvin at his best, but also at his worst. On the one hand, his virtuoso mastery of physics and his ability to examine potential alternatives with razor-sharp logic were unparalleled. On the other hand, as we will see in the next chapter, his overconfidence sometimes left him blindsided by possibilities he had not contemplated. Kelvin began his attack on the problem of our planet's internal temperature by analyzing several possible models of the Earth's cooling. The subsequent evolution of that molten sphere depended on a property of the rocks that was not known with certainty: whether, upon solidification, molten rock expanded or contracted. In the first case, one could expect the solid crust to float on the liquid interior, just as ice floats on the surface of lakes in winter. Although empirical observations were scarce, experiments with molten granite, shale, and trachyte seemed to indicate that molten rock contracted upon cooling and solidification. Kelvin used this information to outline a new possibility.
He proposed that before complete solidification took place, the colder liquid at the surface would have sunk toward the center, thus maintaining convection currents similar to those generated in oil heated in a pan. In this model, convection was assumed to maintain an almost uniform temperature at all depths. Consequently, Kelvin assumed that at the point of solidification the temperature was everywhere roughly the melting temperature of the rocks, and this is the temperature he also attributed to the Earth's interior. This model implied that the Earth was almost homogeneous in its physical properties. Unfortunately, even this ingenious trick did not solve the problem, since in Kelvin's time the value of the melting temperature of rock was not known. He was thus forced to make the best conjecture that the available knowledge allowed, finally adopting a range of acceptable values between 3800 and 5500 °C. Putting all this information together, Kelvin finally calculated an age for the Earth's crust: ninety-eight million years. In many ways, despite the uncertainty of the assumptions, it was a truly brilliant calculation. Kelvin tackled a seemingly unsolvable problem and cracked it. He used solid physical principles both in the formulation of the problem and in his method of calculation, and all this with the best quantitative measurements available in his time. Compared with his determination, the geologists' estimates ranged from crude guesses to idle speculations based on poorly understood processes such as erosion and sedimentation. The number Kelvin produced was quite consistent with an earlier estimate he had made of the age of the Sun. This was important, for even some of Kelvin's contemporaries realized that the force of his argument about the age of the Earth drew at least part of its credibility from that which he had gained with his solar calculations.
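The logic of Kelvin's estimate can be reproduced in outline with the standard solution for a cooling half-space, in which a body initially at a uniform temperature T0 develops a surface gradient G = T0/√(πκt) that decays with time; inverting this gives t = T0²/(πκG²). The sketch below uses the 3 °C per 100 m gradient quoted above and a melting temperature near the low end of Kelvin's range, but the thermal diffusivity κ is my own illustrative value, not Kelvin's number, so the result should be read only as an order-of-magnitude reconstruction.

```python
from math import pi

# Half-space conduction: surface gradient G = T0 / sqrt(pi * kappa * t),
# so the cooling age is t = T0**2 / (pi * kappa * G**2).
T0 = 3900.0          # °C, near the low end of Kelvin's melting-temperature range
G = 3.0 / 100.0      # °C per metre: the 3 °C per 100 m from the geothermal surveys
kappa = 1.2e-6       # m^2/s, an assumed (illustrative) thermal diffusivity for rock

t_seconds = T0 ** 2 / (pi * kappa * G ** 2)
t_years = t_seconds / (365.25 * 24 * 3600)
# t_years is roughly 1.4e8: the same order of magnitude as Kelvin's
# ninety-eight million years
```

The exercise makes clear why the result was so sensitive to the assumptions: the age scales with the square of the poorly known internal temperature and inversely with the equally uncertain diffusivity.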
The key assumption in both cases was that the only source of energy available to the Sun was mechanical gravitational energy. After weaving all these threads into a coherent story, Kelvin managed to obtain an approximate estimate of the age of the Sun. As for the future, we may say, with equal certainty, that the inhabitants of the Earth cannot continue to enjoy the light and heat essential to their life for many millions of years longer, unless there are sources now unknown to us in the great storehouse of creation. As I will describe in the next chapter, that last sentence turned out to be entirely clairvoyant. Even so, there were few British geologists who were not convinced. He replied that he could not suggest a limit. You can count the tons. I replied: "You can perfectly well understand the reasoning of the physicists if you give your attention to it." Leaving aside for a moment the question of how sound his physical assumptions were and the mathematical details of his calculations, Kelvin's main conclusion was accessible. A hotter Sun would have caused more evaporation, and consequently a higher rate of erosion by precipitation. At the same time, a hotter Earth would have experienced more volcanic activity. Therefore, Kelvin concluded, the uniformitarian assumption of an Earth in an almost indefinite steady state did not hold. "It is not, in fact, reasonable to suppose that such marks should anywhere exist. The Author of nature has not given to the universe laws which, like the institutions of men, carry within themselves the elements of their own destruction. He has not permitted in His works any symptom of infancy or of old age, or any sign by which we may estimate either their future or their past duration." Kelvin's reaction to this passage was ruthless. One would say: "No; that sandstone has been in the fire, where it was heated not many hours ago." He came up with a third line of argument, based on the rotation of the Earth about its axis.
The concept itself was ingenious and easy to understand. An initially molten Earth would, because of its rotation, have acquired a slightly ellipsoidal shape, flattened at the poles and bulging at the equator. The faster the initial rotation, the less spherical the resulting shape. This shape, according to Kelvin, would have been preserved after the Earth solidified. Precise measurements of the deviation from sphericity could therefore be used to determine the initial rotation rate. Although the idea was fascinating, turning it into a value for the age of the Earth was extremely difficult. Kelvin thought, however, that the mere fact that one could place a limit on the age of the Earth, however uncertain, was enough to refute the uniformitarian notion of an inconceivably long time. To Kelvin's disappointment, the estimate based on the Earth's rotation rate did not survive long, at least not quantitatively. George Darwin was a physicist of considerable mathematical ability, and he attacked the problem of the Earth's rotation about its axis with infinite patience and attention to detail. This was a consequence of the fact that even a solidified Earth was not perfectly rigid. Darwin showed that, given the many uncertainties about the Earth's interior, there was no reliable way to calculate the age of the planet from its rotation rate. Darwin's work was revealing in another sense, however: it showed that not even the august Lord Kelvin was infallible. As we will see in the next chapter, this may have helped open the door to other criticisms. Deep impact Describing the controversy over the age of the Earth as a fight to the death between physics and geology would be a mistake. Victorian scientists freely attended the meetings of societies that formally represented other branches of science.
More than a dispute between disciplines, the debate about the age of the Earth was fundamentally a confrontation between Kelvin and the doctrine of certain geologists. One may wonder what motivated Kelvin to take up this problem. One thing must be made clear: Kelvin did not object to the theory of evolution per se. He rejected natural selection outright, however, because "I have always felt that this hypothesis does not contain the true theory of evolution, if evolution there has been, in biology." In fact, Kelvin argued that the laws of thermodynamics were themselves part of that universal design. Still, we should remember that even if Kelvin felt somehow emotionally attached to the concept of "design," there is no doubt that his fierce criticisms of the geologists' practices were based entirely on physics, not on his religious beliefs. Until the 1860s, geologists were far more occupied with debates over whether the Earth's interior was solid or fluid than with the Earth's chronology. By the mid-1860s, however, a considerable number of the most influential geologists began to pay real attention to what Kelvin had been saying. Based on studies of sediments, Phillips himself had suggested in 1860 an age for the Earth of about ninety-six million years. By 1865 he was publicly supporting Kelvin. One can often judge whether a scientific theory has had an impact by the vehemence with which objections are raised by heavyweights who have something to lose. Huxley had earned the nickname "Darwin's bulldog" for his energetic support of the theory of evolution and the enthusiasm with which he defended it in debate. Huxley loved controversy as much as Darwin hated it. The story was told with a full measure of juicy detail, some of it probably imaginary, in the October 1898 issue of Macmillan's Magazine.
Then he turned to his antagonist and, with an insolent smile, begged to know whether it was through his grandfather or his grandmother that he claimed descent from a monkey. Mr. Huxley rose slowly but deliberately. He was not ashamed to have a monkey for an ancestor; but he would be ashamed to be connected with a man who used his great gifts to obscure the truth. No one doubted his meaning, and the effect was tremendous. One lady fainted and had to be carried out of the room. Although there are many versions of the exact words of this improvised exchange, Huxley's oratorical skill and the growing feeling against the intrusion of churchmen into the affairs of science have helped the legend to grow. Finally, after a few more debatable but eloquent statements, Huxley delivered his own summary: "the case has completely broken down." Sadly, the reality is different, and even the fact that Kelvin was as much at home in the academic world as in the technical one was not enough. Kelvin's name did not appear on either of the two lists. At least one of the reasons for the subsequent deterioration of Kelvin's standing has to do with the debate about the age of the Earth: today we know that the age of our planet is about 4540 million years. Few disputed the assessment that, in any case, Kelvin's position had been reinforced to some extent by this dialectical war. Huxley, however, raised an issue that turned out to be particularly perceptive. The truth is that Kelvin's command of mathematics was so exceptional that it was virtually guaranteed that if he had made an error, it would not have been in the calculations themselves. It was the set of assumptions feeding his calculations that had to be thoroughly inspected. Although most of Perry's scientific output focused on electrical engineering and applied physics, today he is probably best known for his brief foray into geology.
Salisbury then used Kelvin's estimate of the age of the Earth to argue that evolution by natural selection could not have taken place. But as often happens when a message is too dogmatic, his speech had exactly the opposite effect to the one intended, at least on John Perry. Impressed by the accumulated geological and paleontological evidence, Perry wrote to a physicist friend that "once I knew that an error must exist, its discovery was no longer a matter of luck." Perry finished the first version of his study of the problem of the Earth's cooling on October 12, and over the following weeks he sent the article to several physicists for their comments. Although half a dozen physicists expressed support for Perry's conclusions, Kelvin did not even bother to reply. The chance to talk to him face to face was too good to pass up. The next day he described the event excitedly to a physicist friend: "Last night I sat next to him at Trinity and he had to listen to me. I knew in advance that he would not have read my papers, and that was the case, but I gave him a great deal to think about, and the condescending smile at my ignorance faded in just fifteen minutes. I think he will now take the matter up." Geikie was sitting opposite, his eyes sparkling with pleasure. The scientific journal Nature finally published Perry's article on January 3, 1895. In it Perry expresses his personal reservations about the methodology of the geology of his day: "I am very dissatisfied at having to take account of a quantitative problem as posed by a geologist. Almost always the conditions given are far too vague for the question to be in any way satisfactory, and a geologist does not seem to mind a few million years more or less where time is concerned." Perry focused his attention chiefly on one of Kelvin's fundamental assumptions: that the conductivity of the Earth was the same at every depth.
In other words, Kelvin assumed that heat was transported with uniform efficiency, whether at a depth of one kilometer or a thousand. Kelvin's calculation showed that if the Earth were more than about one hundred million years old, the temperature would increase with depth more slowly than observed, because the cooled skin would be thicker. Perry wondered: what if, instead of being the same everywhere, heat transport were more efficient in the interior than near the surface? Clearly, in that case the base of the Earth's outer skin would stay hot for much longer. In a disdainful letter bordering on insult, Tait wrote to Perry on November 22, 1894, confessing "my absolute failure to understand the point of your article. You say, if I remember rightly, that you have no objection to Lord Kelvin's mathematics. I do not think Lord Kelvin need bother to prove that." Tait, it seems, had not understood the message at all. Since no one at the time could say with any certainty what the conditions of the Earth's interior really were, whatever one assumed for the purpose of calculation was pure conjecture. Kelvin's mistake was not realizing that the margins allowed by the existing observations could introduce into his age estimate a far greater uncertainty than he was willing to accept. "There is no doubt that Lord Kelvin's assertion falters as soon as it is shown that there are other possible conditions in the Earth's interior that yield ages many times greater than his limit." "The least they demand is a trillion, and that is only for part of the secondary period!" In the absence of definitive experimental data on the precise internal conditions of the Earth, the fact that he was able to show that Kelvin could be wrong by a considerable factor was more than enough. When he finally decided to respond, Kelvin was much less aggressive than Tait. Perhaps at no other time did Kelvin show so much respect for opinions that contradicted his own.
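Perry's objection can be made quantitative with equally simple arithmetic. Here is a hypothetical sketch, assuming a convecting interior held near a uniform temperature T0 beneath a purely conducting lid: in that picture the observed surface gradient fixes only the thickness of the lid, not the age of the planet.

```python
# Steady conduction through a lid of thickness d over a convecting,
# nearly isothermal interior: the surface gradient is G = T0 / d.
# Because this gradient is steady, it can persist indefinitely and
# therefore places no upper limit on the Earth's age.
T0 = 3900.0   # assumed interior temperature, °C (illustrative)
G = 0.037     # observed near-surface gradient, K/m
d = T0 / G    # lid thickness implied by the observed gradient, in meters

print(f"Conducting lid thickness: {d / 1000:.0f} km")
```

The implied lid is on the order of a hundred kilometers, consistent with the observation later in the chapter that only the heat of roughly the outermost hundred kilometers could be conducted away in about a hundred million years.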
Most likely this magnanimity expressed his sense of obligation to show sympathy toward a former student. He hastened to insist, however, that his estimate of the Sun's age still "denied sunlight for more than twenty million years, perhaps a few score, of past time." As we will see later in this chapter, Kelvin had no reason to revise his calculation of the age of the Sun. Perry's challenge prompted Kelvin to spend the next couple of months doing experiments in which he heated basalt, marble, halite and quartz. His experiments seemed to show, in agreement with recent results of the Swiss geologist Robert Weber, that as the temperature increased the conductivity changed little, or even decreased slightly. Kelvin declared triumphantly that "nothing caused him to depart much from his estimate of twenty-four million years." Perry, however, was not convinced. The measurements of the conductivity of heated rocks had perhaps ruled out one way in which heat could be transported more easily to great depths, but other possibilities remained open. In particular, convection in a mass of fluid character was an attractive alternative. Perry's intuition turned out to be visionary. The realization that convection was possible even within what appeared to be a quite solid mantle played an important role in the eventual acceptance of the ideas of plate tectonics and continental drift. Not only can heat be transported by fluid motion but, over long periods of time, entire continents can be displaced horizontally. The precise conditions at the boundary between the Earth's inner core and its outer layers remain a hot topic of research even today. "I have shown that we have reason to believe that the age, in all three cases, may be seriously underestimated." There was, however, another crucial hypothesis in Kelvin's estimate of the age of the Earth: that no unknown source of internal or external energy could compensate for the heat losses.
The phenomenon became known as radioactivity. In Wilson's estimate, "3.6 grams of radium per cubic metre of the Sun's volume would suffice to supply its total output of energy." An internal heat source was precisely what, as Perry had shown, was needed to increase the age estimates. In other words, in the scenario Kelvin proposed, the Earth was simply losing heat from its original reserves, but the discovery of a new source of internal heat seemed to undermine the foundations of that scheme. For his part, Kelvin showed great interest in the discoveries concerning radium and radioactivity, but he remained firm in his belief that they would not alter his age estimates. In other words, Kelvin proposed that atoms simply pick up energy from the ether, only to release it again at the moment of disintegration. However, showing considerable intellectual courage, he abandoned this idea at the 1904 meeting of the British Association, although he never published a retraction in print. Unfortunately, for some unclear reason, in 1906 he again parted company with the rest of the physics community by rejecting the idea that radioactive decay transmuted one element into another, even though Rutherford and others had accumulated strong experimental evidence of the phenomenon. Even before this altercation, in a book published in 1904, Soddy had not hesitated to state firmly that "the limitations with respect to the past history and the future of the universe have been enormously extended." Rutherford was somewhat more generous: "I was relieved to see that he soon fell asleep, but just as I reached the important point, I saw the old man sit up in his seat, open one eye and give me a baleful look! Then inspiration came, and I said that Lord Kelvin had limited the age of the Earth, provided no new source of heat was discovered. That prophetic utterance refers to what we are considering tonight, radium! The old man beamed at me."
Over time, radiometric dating became one of the most reliable techniques for determining the age of minerals, rocks and other geological bodies, including the Earth itself. The series of disintegrations continues until a stable element is reached. By measuring and comparing the relative abundances of the naturally occurring radioactive isotopes and all their decay products, and combining these data with the known half-lives, geologists have been able to determine the age of the Earth with great precision. Adams replied that the different methods had yielded an estimate of one hundred million years. Rutherford then remarked calmly: "I know that this piece of pitchblende is seven hundred million years old." If that were the whole truth, Kelvin's blunder would not have figured in this book of errors, since Kelvin could not have taken into account an energy source that had not yet been discovered. The truth is that it is wrong to attribute the incorrect age determination entirely to radioactivity. But not all of that heat flows out easily. A careful examination of the problem reveals that, given Kelvin's assumptions, had he included radioactive heating he would really have needed to consider only the heat generated in the first hundred kilometers of the Earth's outer crust. The reason is that Kelvin had shown that only the heat at those depths could be effectively extracted by conduction in about a hundred million years. This was the true source of his unacceptably low age estimate. On the feeling of knowing As we can neither interview Kelvin nor capture images of his brain at work, we will never know for certain the exact reasons for his wrongheaded obstinacy. What we do know, of course, is that people who spend a good part of their professional lives defending certain propositions do not like to admit that they were wrong. But should not Kelvin, as great a scientist as he was, have been different?
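The radiometric logic described above reduces to a single formula: if a parent isotope with decay constant λ has produced a measured amount of daughter, the age is t = ln(1 + D/P)/λ. Here is a sketch using the uranium-238 → lead-206 chain (half-life about 4.47 billion years); the daughter-to-parent ratio is a hypothetical illustrative number, not a real measurement.

```python
import math

# Radiometric age from a parent/daughter isotope pair.
# N_parent(t) = N0 * exp(-lam * t) and daughter D = N0 - N_parent,
# so  t = ln(1 + D/P) / lam.
half_life = 4.47e9             # years, U-238 -> Pb-206 (stable end of the chain)
lam = math.log(2) / half_life  # decay constant, 1/years

D_over_P = 1.02                # hypothetical measured daughter/parent ratio
t = math.log(1 + D_over_P) / lam

print(f"Inferred age: {t / 1e9:.2f} billion years")
```

With this (deliberately chosen) ratio the inferred age comes out near 4.5 billion years, the order of the modern figure for the Earth quoted in this chapter; a real determination combines several independent decay chains and checks them against one another.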
Luckily, modern psychology and neuroscience are beginning to shed some light on what has been called the "feeling of knowing," which almost certainly affected part of Kelvin's thinking. First, it should be noted that in his approach to science and his crusade for knowledge, Kelvin was more akin to an engineer than to a philosopher. Thus, at a very basic level, Kelvin's error was a consequence of his belief that he could always determine what was probable, without noticing the ever-present danger of overlooking some possibility. The theory of cognitive dissonance, originally developed by the psychologist Leon Festinger, deals precisely with the discomfort people experience when confronted with information inconsistent with their own beliefs. The messianic current of the Hasidic Jewish movement known as Chabad provides an excellent, albeit esoteric, example of this process of reorientation. After the rabbi suffered a stroke in 1992, many faithful followers of the Chabad movement were convinced that he would not die but would "rise up" as the Messiah. In that study, 225 female first-year university students were first asked to rate eight manufactured items on a scale of attractiveness or desirability from 1.0 to 8.0. In the second phase, the students were allowed to choose as a gift one of two of the eight items that had been shown to them. Then came a second round of scoring all eight items. In other words, things seem better after we have chosen them, a conclusion later corroborated by neuroimaging studies showing increased activity in the caudate nucleus, a region of the brain involved in "feeling good." Kelvin's case fits the theory of cognitive dissonance like a glove. After repeating his arguments about the age of the Earth for more than three decades, it was unlikely that Kelvin would change his mind just because someone suggested the possibility of convection.
As you will remember, Perry was not able to prove that convection actually took place, only that it was plausible. Perhaps the answer can be found in the way the brain's reward circuits work. What they discovered was that the rats pressed the lever that activated the electrodes placed at those pleasure-inducing points more than six thousand times per hour! Over the past two decades, neuroscientists have developed sophisticated imaging techniques that let them see in detail which parts of the human brain light up in response to pleasurable tastes, music, sex or a winning bet. Several studies have shown that an important part of the reward circuitry consists of a set of neurons that originate near the base of the brain and communicate with the nucleus accumbens, a region beneath the frontal cortex. Other areas of the brain supply the emotional content, linking the experience to memories and triggering responses. The hippocampus, for example, "takes notes" while the amygdala "scores" the pleasure involved. So how does all this relate to intellectual endeavors? To embark on a relatively long process of thought, and to persist in it, the brain needs at least some promise of pleasure along the way. Whether it is the Nobel Prize or the envy of the neighbors, a pay raise or the simple satisfaction of completing a Sudoku rated "diabolical," the nucleus accumbens needs some dose of reward to keep going. Drug addicts need ever more of the drug to obtain the same effect. In intellectual activities, this can give rise to an increased need to be right all the time and, concomitantly, a growing difficulty in admitting mistakes. The neuroscientist and writer Robert Burton has specifically suggested that the insistence on being right may bear a psychological similarity to other addictions. If so, Kelvin would definitely fit the profile of someone addicted to the feeling of being right.
Almost half a century of what he undoubtedly saw as a series of victorious battles with the geologists must have strengthened his convictions to the point where those neural links could no longer be dissolved. In other words, motivated reasoning is regulated by emotions, not by dispassionate analysis, and its purpose is to minimize threats to the self. It is not inconceivable that toward the end of his life Kelvin's "emotional brain" occasionally overwhelmed his "rational brain." The reader will remember that I referred earlier to Kelvin's calculation of the age of the Sun. I do not consider that estimate a blunder. After all, his estimate was wrong by about the same factor as his value for the age of the Earth. A comment that was without doubt insightful. With the discovery of radioactivity, many assumed that the radioactive release of heat would eventually prove to be the real source of the Sun's energy. Yet even under the extravagant assumption that the entire Sun was composed of uranium and its radioactive decay products, the energy generated would not have sufficed to explain the Sun's observed luminosity. The question of the age of the Sun could not be answered until a few decades later. Finally, in the 1940s, the astrophysicist Fred Hoyle proposed that fusion reactions in stellar cores could synthesize the nuclei from carbon to iron. Even though Kelvin's calculation of the age of the Earth was a mistake, it still looks absolutely brilliant. Kelvin had transformed geochronology from vague speculation into a true science, grounded in the laws of physics. His pioneering work opened a vital dialogue between geologists and physicists, a dialogue that remained open until the discrepancy was resolved. At the same time, Kelvin's parallel investigations into the age of the Sun pointed clearly to the need to identify new sources of energy.
From our human perspective, one of the fundamental benefits of the Earth's having enjoyed 4500 million years of energy from the Sun has been the emergence of complex life. Together, these two molecules contain all the information necessary for an apple tree, a snake, a woman or a man to function. But these discoveries, too, were accompanied by great errors. Rumor had it that the famous chemist Linus Pauling was about to reveal something truly astonishing, perhaps even the solution to one of the great mysteries of life. When he finally arrived, one of his research assistants brought with him an object that looked like a large sculpture covered with a cloth and tied with string. The lecture once again demonstrated Pauling's virtuoso command of chemistry and his exquisite gifts as a performer. We will return to this exciting story later in this chapter. The Life article was no more than a brief summary, in popular language, of what had been a miracle year in Pauling's long career. That issue marked the culmination of fifteen years of cutting-edge research by Pauling. The path to the alpha helix Pauling began thinking about proteins in the 1930s. His first articles on the subject proposed a theory of hemoglobin in which he suggested that each of the four iron atoms in the molecule formed a chemical bond with an oxygen molecule. While working on the subject, Pauling pioneered a new experimental technique. That method turned out to be a great tool for structural chemistry. Pauling knew how to exploit magnetic properties, for example to determine the rates of various chemical reactions. This casual collaboration between the two scientists turned out to be the starting point of a hugely successful quest. Mirsky and Pauling were the first to propose that a native protein is composed of chains of amino acids, known as polypeptides, folded in a regular way. Very soon afterward, Pauling realized that a key question was the precise nature of this folding.
Luckily, some clues began to emerge in the early 1930s thanks to X-ray diffraction experiments. In this powerful technique, scientists aim a beam of X-rays at a crystal and then try to reconstruct the crystal's structure from the way the invisible rays bounce off the sample. The X-ray photographs, however, were quite blurry and did not allow a reliable determination of the structure. Still, the photos seemed to indicate that the structural unit repeated along the axis of the hair every 5.1 angstroms. Figure 11 shows a schematic drawing of the kind of general structure he had in mind. This general characteristic turned out to be extremely important because it greatly restricted the number of possible structures, and Pauling therefore hoped to be able to identify the correct configuration. But science rarely progresses exactly as expected. When a promising hypothesis does not work out, scientists often try to improve the quality of the available experimental data, since better information can reveal clues that were previously hidden. Corey devoted himself enthusiastically to these investigations, and by 1948 he and his collaborators at Caltech had managed to work out the precise architecture of a dozen of these compounds. "I caught a cold and had to stay in bed for about three days. Within two days I had grown tired of reading detective and science-fiction stories, and I began to mull over the structure of proteins." Pauling began his new attack on the puzzle from the assumption that all the amino acids of alpha keratin should occupy structurally equivalent positions with respect to the polypeptide chain. To stabilize the construction, Pauling formed hydrogen bonds between one turn of the helix and the next, parallel to the helix axis. In fact, he found two structures that could work, one of which he called the alpha helix, the other the gamma helix.
That Pauling managed to find solutions to the problem with such rudimentary tools attests to how crucial his earlier discovery of the planarity of the peptide group was. Without planarity, the number of possible conformations would have been too great. Excited, Pauling asked his wife to bring him a slide rule so that he could calculate the repeat distance along the fiber axis. He found that the structure of the alpha helix repeated after eighteen amino acids in five turns; that is, the alpha helix had 3.6 amino acids per turn. The gamma helix left a hole at its center too small to be occupied by other molecules, so Pauling turned his attention to the alpha helix. And so, although he was very pleased with his alpha helix, he decided not to publish the model until he better understood the reason for the discrepancy in the spacing. Worried that something might still be wrong with his model, yet uneasy that the Cavendish group might win the race and work it out before him, Pauling kept silent about the alpha helix. The problem, however, obsessed him. Pauling especially wanted to know whether Branson could find a third helical structure that would satisfy the constraints of planar peptide bonds and a maximum of stabilizing hydrogen bonds. Branson and Weinbaum also confirmed that the alpha helix, the tighter of the two helices, was characterized by a distance of 5.4 angstroms between turns. Pauling now faced a dilemma: ignore the inconsistency with the X-ray data and publish his model, or delay publication until the mystery was solved. The idea on which X-ray crystallography is based was genius in its simplicity. But although it was relatively easy to make gratings fine enough for visible light, it was impossible to produce them for X-rays, whose wavelengths are several thousand times shorter than those of the visible part of the spectrum.
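The slide-rule computation described above is short enough to reproduce. Here is a sketch using only numbers quoted in the text: eighteen residues in five turns, and a rise per residue of about 1.5 angstroms (a value that also appears later in this chapter, in Perutz's test).

```python
# Alpha-helix geometry from the figures in the text.
residues_per_turn = 18 / 5                    # structure repeats after 18 residues in 5 turns
rise_per_residue = 1.5                        # angstroms along the helix axis
pitch = residues_per_turn * rise_per_residue  # axial distance between successive turns
axial_repeat = 5 * pitch                      # full repeat: 5 turns, 18 residues

print(f"{residues_per_turn:.1f} residues per turn, "
      f"pitch {pitch:.1f} A, repeat every {axial_repeat:.1f} A")
```

The 5.4-angstrom pitch that falls out of this arithmetic is exactly the number Branson and Weinbaum confirmed, and exactly the mismatch with the 5.1-angstrom repeat seen in the hair photographs that made Pauling hold back publication.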
Surprisingly, he achieved this important result during his first year as a research student at Cambridge. Father and son then set about building the X-ray spectrometer that allowed them to analyze the structure of many crystals. Pauling read the thirty-seven pages avidly and was relieved to find that although the researchers described some twenty structures, the alpha helix was not among them. Moreover, they concluded by saying that none of the structures examined constituted an acceptable model for alpha keratin. For one thing, none of Bragg's models assumed the planarity of the peptide group, of whose correctness Pauling was absolutely convinced. For another, the Cavendish team seemed wedded to the idea that there had to be a whole number of amino acids in each turn of their structures. Pauling's alpha helix broke with that tradition, with its 3.6 amino acids per turn, and he saw nothing wrong in it. Perutz would later explain that, to get his team oriented, Bragg had stuck tacks into a broomstick to represent the amino acid residues, arranging them in a helical pattern with an axial distance of 5.1 centimeters between successive turns. Pauling had always had a very competitive nature. Although he was reassured that the Cambridge team had overlooked several important issues, the publication of Bragg's article set him off for fear of being overtaken. This raised the possibility that this last feature of the X-ray photographs of hair was no more than an artifact produced by superimposed reflections rather than an important clue to its structure. It was a happy coincidence that this important article was submitted precisely on Pauling's fiftieth birthday, February 28, 1951. Dunitz recalled that in 1950 Pauling always used the term "spiral" to describe the structure of alpha keratin.
Pauling replied that a spiral could be either two- or three-dimensional, but added that, on reflection, he preferred the word "helix." The title read: "The structure of proteins: Two hydrogen-bonded helical configurations of the polypeptide chain." Pauling was by then so sure of his model that he followed the article on the alpha helix with a flood of articles on the folding of polypeptide chains. The structure seemed to be absolutely correct. On the other hand, how could Pauling and Corey's helix be correct, however beautiful it was, if it had the wrong repeat? "I cycled home for lunch and ate it oblivious to my children's chatter, without even answering my wife's questions about what was wrong with me." Thinking a little more about Pauling's model, Perutz realized that the alpha helix resembled a spiral staircase in which the amino acid residues formed the steps, each step about 1.5 angstroms high. None of the Bragg group's models would have produced that mark, whereas it would be an unequivocal signature of Pauling's alpha helix. The calculations predicted that the optimal conditions for observing the reflection required tilting the fibers at an angle of about 31 degrees. Perutz felt irresistibly driven to perform the decisive test at once. Perutz showed Bragg the X-ray image early on Monday morning. Bragg asked him what had so suddenly given him the idea of performing the decisive test. Perutz replied that he was furious with himself for not having thought of the alpha helix himself. Bragg replied with what has become an immortal phrase: "I wish I had made you angry earlier!" The mark of life Not everything Pauling wrote in that famous series of 1951 articles was correct. A thorough examination of his complete output of that year reveals several weaknesses. In particular, the gamma helix ended up being abandoned.
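Perutz's 31-degree figure can be recovered from Bragg's law, n·λ = 2·d·sin θ. The sketch below assumes copper K-alpha radiation (λ ≈ 1.54 angstroms, the usual laboratory X-ray source; the text does not specify the wavelength, so that is an assumption here).

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
# First-order (n = 1) reflection from the 1.5-angstrom rise per residue
# along the alpha-helix axis:
wavelength = 1.54  # angstroms, Cu K-alpha (assumed source)
d = 1.5            # angstroms, the "step height" Perutz identified

theta = math.degrees(math.asin(wavelength / (2 * d)))
print(f"Bragg angle: {theta:.0f} degrees")  # close to the ~31-degree tilt in the text
```

That the geometry comes out at about 31 degrees under this standard assumption is why tilting the fibers by that angle was the natural way to hunt for the 1.5-angstrom reflection.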
These small shortcomings, however, in no way detract from Pauling's pioneering achievement: the alpha helix and its prominent role in the structure of proteins. Pauling's contributions to our knowledge of the nature of life were considerable. Pauling's influence on the general theory and methodology of molecular biology was equally impressive. And so it is: the structure of many organic molecules, from proteins to nucleic acids, fully confirms this prediction. Second, Pauling was one of the pioneers of model building, which he turned into an art of prediction based on strict rules of structural chemistry. The three-dimensional color models developed at Caltech even became coveted objects of macromolecular research. These models, produced for other laboratories by the Caltech workshop, fetched 1220 dollars in 1956 for a set containing about six hundred models of atoms. There is another remarkable observation about genetics that Pauling made in 1948 during a lecture, although it seems that even he did not grasp all its implications at the time. Thomas Hunt Morgan and his colleagues identified those units with the genes, which are arranged linearly on the chromosomes. Later, toward the end of his lecture, he added the following comment: "The detailed mechanism by which a gene or a virus molecule produces replicas of itself is not yet known. In general, the use of a gene or a virus as a template would lead to the formation of a molecule not with an identical structure but with a complementary structure. It might happen, of course, that a molecule could be at the same time identical with and complementary to the template on which it is molded. However, this case seems too unlikely to be generally valid, unless it holds in the following way." The first work on the structure and constitution of nucleic acids, due to the biochemist Phoebus Levene, did not help to awaken interest in these molecules. Rather, it achieved the opposite.
Levene managed to distinguish between deoxyribonucleic acid and ribonucleic acid and discovered some of their properties. But his results created the impression that these were rather simple and boring substances, inadequate for the complex tasks of governing growth and replication. This impression persisted throughout the 1940s. The article stated: "If it is proved beyond any reasonable doubt that the transforming activity of the material described is really an inherent property of the nucleic acid, the biological specificity of its action would still have to be explained in chemical terms." But he did not accept it: "You see, I was so delighted with the proteins that I thought they were probably the hereditary material, not the nucleic acid." Ronwin replied by pointing out that there were other substances in which phosphorus was bonded to five oxygen atoms. Pauling decided he had nothing to lose and wrote to Wilkins to ask if he was willing to share his images. Her mathematical training would prove crucial to the discoveries she would make. But that was not what Franklin expected when she signed on to go to King's, and she had good reason to think as she did. Franklin and Wilkins were destined to clash, and that is what happened: they ended up working separately even though they shared the same laboratories. Watson described Crick as "without a doubt the brightest person I have ever worked with, and the nearest approach to Pauling I have ever seen." The two men brought very different but complementary backgrounds, as well as distinct traits of character and temperament. It is entertaining to read how each described the other's personality. Watson added that Crick "talked louder and faster than anyone else." Despite their different backgrounds, something in them clicked at once. Crick suspected it was because both shared by nature "a certain youthful arrogance, a ruthlessness, and an impatience with sloppy thinking." Their ways of thinking were also quite similar.
There was something else that made the collaboration between Watson and Crick so powerful: since neither was professionally senior to the other, they could afford to be brutally frank in criticizing each other's ideas. This kind of intellectual honesty is sometimes lacking in relationships hindered by formal politeness, deference to a superior, or abuse of authority. That turned out to be absolutely true. These were the same images that Pauling had asked Wilkins for. Wilkins finally answered politely that he could not share his images until he himself had had the chance to do some further research. So by the end of 1951, Pauling had still not managed to see a single X-ray diffraction image of reasonable quality. The main reason seemed to be Pauling's alpha-helix model for a protein. By introducing a non-integer number of amino acids into each turn of his alpha helix, Pauling had further widened the horizons of traditional structural crystallographers. Toward the end of 1951, events began to move quickly. The model consisted of three helical strands held up by a sugar-phosphate backbone on the inside, with the bases pointing outward. The presentation of Watson and Crick's first model turned out to be a complete disaster. Apparently, part of the error arose because Watson had misunderstood a crystallographic term Franklin had used in her lecture the week before. This unfortunate confusion led Crick to believe that the number of possible configurations was quite limited. One of the arguments adduced was that "his ideas derive directly from statements made during a colloquium, and that seems to me as convincing as his own argument that his approach comes essentially from nowhere." A draft reply written by Watson and Crick two days later indicates that "we all agree that there must be a friendly settlement." In the meantime, Franklin was making important progress.
One of the forms, which she called "A," was crystalline. The other, form "B," was more extended and contained more water. Throughout her research, Franklin showed that her way of thinking differed clearly from Pauling's. Franklin abhorred "unfounded assumptions" and heuristic methods, and insisted on letting the X-ray data lead her to the correct answer. For example, although in principle she had nothing against helical structures, she flatly refused to assume their existence as a working hypothesis. In Crick's words: "He just wanted the answer, and whether he got it by sound or flashy methods did not matter. All that mattered was getting there as fast as possible." However, as Astbury and Beighton apparently thought they were dealing with a mixture rather than a pure configuration, they never made the existence of those plates public. The X-ray pictures available showed a strong reflection at approximately 3.4 angstroms, but not much more. As a starting point, Pauling went back to Ronwin's article. Ronwin had placed the four bases on the outside of the structure and the phosphates at the center. Following that line of argument, Pauling again embarked on what has become known as his "stochastic method." The next step was to find out how many strands the helix needed. Pauling decided to address this problem by calculating the density of the structure. However, before he could even start, an unexpected distraction stopped him in his tracks. A letter informed him that his passport could not be issued because, in the department's opinion, his trip "would not be in the best interests of the United States." At first, Pauling regarded the refusal as no more than a minor inconvenience and was convinced the problem would be easily resolved. Frustrated, he wrote: "I am convinced that no harm to the nation will come from the trip I propose."
The president's secretary responded politely that the Passport Division had been asked to reconsider its decision. The decision, however, was not revoked. In April, with greater urgency, Pauling took a series of steps. First, he sought a lawyer's advice. Second, he sent the Passport Office sworn statements of loyalty and an affidavit that he was not a Communist. Finally, he managed to meet in person with Ruth Shipley. Unsurprisingly, Pauling's passport trials and tribulations scandalized scientists around the world, and the international pressure finally took effect. Apart from its political significance, the passport debacle had some scientific consequences. There they showed him the excellent X-ray plates they had obtained. It seems, however, that Corey did not grasp the full implications of the photographs at the time, because he mentioned nothing of importance to Pauling. Entire volumes of speculation have been written about what might have happened if Pauling himself had obtained permission to travel and had seen those photographs. But the truth is that all such speculation is quite irrelevant. It has to do with the nucleotide bases. The following anecdote demonstrates the extent to which emotional responses can interfere even with processes supposedly governed by the purest scientific reasoning. They traveled aboard the famous Queen Mary. That did not sit well with Pauling, usually an easygoing person, who on that occasion was looking forward to a relaxed vacation. Similarly, the number of guanine units was equal to the number of cytosine units. A conversation he had had with Crick in England that summer had given him an idea of how he might finally solve the puzzle of the protein reflection at 5.1 angstroms. The protein coat of the virus stayed outside the bacterial cell and played no role in the infection. But not everyone was convinced.
In fact, Hershey himself cautiously pointed out that it was still unclear whether his result had a fundamental meaning. This led him to a surprising conclusion: "The cylindrical molecule is made up of three chains, coiled around one another so that each chain is a helix." In other words, having convinced himself that a two-strand helix would produce too low a density, Pauling opted for a three-strand helical structure, which became known as a triple helix. The next problem he faced concerned the nature of the core of the three-chain helical design, the part of the molecule closest to the axis. The question was: which of the three known components of the nucleotides formed the core? The three-strand molecule was held together by hydrogen bonds between the phosphate groups of the different strands. This structure looked promising, but Pauling still had some problems. The center of the molecule now seemed so crowded with the three phosphate chains that it recalled those contests to see how many people can be squeezed into a phone booth. Pauling knew that the phosphate ion is tetrahedral, with a central phosphorus atom surrounded by four oxygen atoms at the vertices of a pyramid. In this process, Pauling followed the same instincts that had previously led him to triumph with the alpha helix. He believed that if he could find a structural chemical solution broadly consistent with the X-ray data, the remaining problems would solve themselves. Pauling did not have an answer, but he assumed one would be found as soon as the main architecture was settled. The pace of work was frantic; Pauling even summoned a small group of scientists in his laboratory for an informal presentation on Christmas Day. Toward the end of the month, he thought he already had an essentially correct model. The article began by declaring: "Nucleic acids, as constituents of living organisms, are of importance comparable to that of proteins."
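The kind of density bookkeeping that led Pauling to three chains can be sketched in general terms (the symbols below are illustrative; they are not Pauling's actual figures or notation). For a cylindrical helix of radius r containing n strands, each contributing m nucleotides of mean molar mass M per repeat of height h, the density is:

```latex
% Illustrative density bookkeeping (all symbols are assumptions,
% not Pauling's actual numbers): n strands, m nucleotides of mean
% molar mass M per strand per repeat of height h, cylinder radius r,
% Avogadro's number N_A.
\rho = \frac{n\,m\,M}{N_A\,\pi r^{2} h}
\qquad\Longrightarrow\qquad
n = \frac{\rho\,N_A\,\pi r^{2} h}{m\,M}
```

In a sketch like this, one plugs in the measured fiber density and dimensions and asks which integer n comes closest; by this logic, a value nearer 3 than 2 would favor a triple helix.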
The structure accounts for some of the features of the X-ray photographs, but detailed density calculations have not yet been made, and the correctness of the structure cannot be considered proven. In other words, although some loose ends remained to be tied up, Pauling wanted to establish his priority. In contrast to the cautious tone of the scientific article, in his personal communications about the proposed model Pauling expressed more confidence and was very optimistic: "My impression is that in a month or so we will send off a manuscript describing the structure, but I have almost no doubt about the correctness of the structure we have discovered. The structure is really beautiful." Peter shared an office with four other colleagues; the desk to his right was the one Jim Watson occupied from time to time. That happened on January 13, 1953. Peter added to his letter a brief comment about the pressure British scientists were feeling: "They told me a story today. You know how children are threatened: 'You had better be good or the bogeyman will come.'" With the memory of Pauling's earlier victory with the alpha helix still fresh in everyone's mind at Cambridge, the two young men wondered whether this would not be a catastrophic case of déjà vu. "There are no interesting girls, only affected young men whose only interest is sex, in an indirect way." Then, after examining the illustrations for a few minutes, Watson could not believe what his eyes were seeing. Pauling's structure, with the phosphates at the center and the bases on the outside, looked surprisingly like the model he and Crick had discarded. Pauling's nucleic acid molecule was simply not an acid: it could not release positively charged hydrogen atoms on dissolving in water, which is the very definition of an acid.
Moreover, there was no way to remove those hydrogen atoms, since it was precisely their hydrogen bonds that held the three strands together. It was hard for Watson and Crick to take in the enormity of the mistake. The best chemist in the world had built a completely flawed model, and not over some subtle biological feature but through a tremendous blunder of the most elementary chemistry. To Watson's satisfaction, everyone confirmed the unthinkable: Pauling had botched the chemistry. Only two things remained to be done that day. If he and Watson did not start model building immediately, Crick argued, it would not be long before Pauling discovered his mistake and revised his model. Crick reckoned they had no more than six weeks to work out a correct model. "As soon as it opened its doors that night, we were there to drink a toast to Pauling's failure." Anatomy of an error. Let us try to analyze, one by one, the causes of Pauling's failure. However, it was not until November 1952, a full year later, that he began to work seriously on the problem. Even so, by the end of December 1952, after barely a month of work, he had already submitted an article for publication! This contrasts with the effort he devoted to the structure of the polypeptide chain, which kept him busy for some thirteen years and which he did not publish until he felt quite confident about his model. Maurice Wilkins certainly believed that: "He can't possibly have spent more than five minutes on the problem himself." We will return later to the possible reasons for this haste and apparent lack of concentration. Then there are the two surprising memory lapses: one concerning Chargaff's base ratios and the other concerning Pauling's own principle of complementarity. Pauling would later claim that he had known of these ratios but had forgotten them.
The consequence was that his model made no sense in the light of the chemical data. Pauling's second memory failure was even more incredible. Recall that Pauling had said in 1948 that if genes were formed by two parts structurally complementary to each other, replication would be relatively simple. Clearly, this principle of self-complementarity strongly suggests an architecture based on two strands, and it was markedly incongruent with a three-strand structure. However, as we have seen, Pauling made no special effort to see Franklin's plates. "Nobody could have written a piece of fiction in which Linus made a mistake like that." A thorough examination of the many potential causes of Pauling's calamitous model raises questions at a deeper level: how can we explain the haste, the apparent lack of effort, the forgetfulness, and the neglect of some of the most basic rules of chemistry? Nothing could be further from reality. "The only person I could imagine being in the race was Jim Watson." That is certainly true, but it can only be part of the explanation, since Pauling had shown far more caution and patience in the case of the alpha helix. In this sense, it is a classic case of inductive reasoning: the common strategy of making probabilistic conjectures based on past experience, only taken too far. Everyone uses inductive reasoning all the time, and it often helps us make correct decisions from relatively poor data. Most people would probably answer "playwright," and in doing so they would be perfectly justified. Although there is nothing illogical about completing the phrase with "cook" or "card player," the most likely word is "playwright." Inductive reasoning is what allows us to use our accumulated experience to solve problems by choosing the most likely answer. Like experienced chess players, we do not usually analyze all the logically possible answers; we choose the one we consider most likely.
This is an essential part of our cognition. However, since inductive reasoning involves making probabilistic conjectures, it sometimes goes wrong, and occasionally very wrong. Pauling believed he could take a shortcut because past experience had shown him that all his structural intuitions turned out to be true. But why did he feel he had to cut corners at all? On a somewhat more speculative note, Pauling's decision to rush into publication could be related to a cognitive bias known as the framing effect, which is tied to our strong aversion to loss: people tend to become risk-seeking when the options are framed as losses. It is possible that Pauling preferred to take the risk when faced with a probable loss. Then there is the puzzling question of Pauling forgetting Chargaff's rules and, more importantly, his own speculations about the self-complementarity of the genetic system. Pauling and Corey discussed the biological implications of their model only in passing. In the opening paragraph of their article, they mention without much enthusiasm that there are indications that nucleic acids are "involved" in the processes of growth and cell division, and that they "participate" in the transmission of hereditary characters. The forgetting of Chargaff's rules is, in my opinion, less mysterious. Entangled in his attempts to complete his work on proteins and in his bitter political confrontations with McCarthyism, Pauling barely had time to concentrate. That could not have helped. Extensive studies by Swedish researchers show that ordinary memory failures occur much more often when attention is divided or has to be redirected quickly. Pauling's failure to remember Chargaff's rules is therefore not too surprising. But how could the most famous chemist in the world blunder on a matter of elementary chemistry? I asked the molecular biologist Matthew Meselson what he thought about this aspect of the error.
His reasoning must have run something like this: he had a successful model for proteins consisting of a helical strand with the side chains pointing outward. This created a packing problem along the axis, but the remaining features were, in Pauling's mind, details that could be worked out later. Once again, his earlier success with the alpha helix had the effect of blinding him. Unfortunately, as we all know, the devil is often precisely in those details. "Errors do no harm to science, because there are plenty of smart people who will immediately spot and correct them. The worst that can happen is that you look like a fool, but that does no harm; it only wounds your pride. On the other hand, if it turns out to be a good idea and you do not publish it, science suffers a loss." I must say that I fully agree with the "forgive" part, but I think we should not forget. As I have tried to show here, there is much to be learned from analyzing the mistakes of such brilliant people. A great deal of ink has been spilled over the question of the ethics of this particular act. In my humble opinion, three main parts of this story deserve attention. Second, there is little doubt that Franklin should have been consulted before her unpublished results were shared with members of another laboratory. Everyone can form their own judgment. Be that as it may, the effect of the photograph on Watson was dramatic: the dark cross was the unmistakable signature of a helical structure. It is not surprising that, as he himself would later describe, his mouth fell open and his pulse began to race. Watson and Crick spent the next few weeks frantically trying to build models in which the bases formed the rungs of the spiral staircase they had in mind. The first attempts were unsuccessful. Then there was the question of the bonds between the two bases of each rung, and between the rungs and the "legs" of the ladder.
By placing these atoms in their correct positions, new possibilities opened up for forming bonds between the bases: the rungs acquired the same length. Moreover, this pairing offered a natural explanation of Chargaff's rules. The resulting structure was the celebrated double helix, in which two helical strands made of alternating sugars and phosphates run along the outside, with the paired bases, bound to the sugars, forming the rungs. Figure 16 shows the spot in the Eagle where Crick made his announcement. One of the documents recovered with Crick's "lost" correspondence is a draft of the letter that was to accompany the manuscript: "Since Bragg has not yet seen it, I beg you not to show it to anyone. If we do not hear from you within a day or so, we will assume you have no objection to its current form." In fact, the two King's groups also sent articles to Nature. Accompanying the note was a draft of Wilkins's manuscript: "We want to see yours, and I do not doubt that she will want to see ours." Perhaps the most fascinating aspect of the new correspondence concerns Pauling. "That would inevitably mean that Pauling would work out the structure, and not you." "If we suggested to her that it would be good if she did not do it, we would only encourage her to do it." This exchange of views is a perfect demonstration of the respect Pauling still commanded even at one of the lowest moments of his career. The first was the real milestone, the article by Watson and Crick describing the structure of the double helix. The article occupies little more than a page, but what a page! "They have kindly made their manuscript available to us in advance of publication." They immediately add, however: "In our opinion, this structure is unsatisfactory."
The Watson and Crick model immediately suggested a solution both to the way the genetic information is encoded and to the puzzle of how the molecule manages to copy itself. "It follows that in a long molecule many different permutations are possible, and it therefore seems likely that the precise sequence of the bases is the code which carries the genetical information." The copying process works by "unzipping" the middle of the double helix's staircase to produce two halves, each with one of the legs and half of each rung. Crick later explained that this enigmatically economical phrase was in fact a compromise between his own desire to discuss the genetic implications in the first article and Watson's worry that the structure might still prove incorrect. The statement is therefore merely a way of establishing priority. That Watson still harbored doubts about the model is well documented in his letters from that time. The structure is probably helical; the phosphate groups lie on the outside of the structural unit, on a helix about 20 angstroms in diameter. Sadly, Rosalind Franklin died of cancer in 1958, at thirty-seven. Since the Nobel Prize is not awarded posthumously and cannot be shared by more than three people, we will never know what would have happened had Franklin lived until 1962. In 2009 the famous photograph 51 lent its name to a successful play by Anna Ziegler. Nobody likes to admit defeat, and scientists are no exception. "If the people at King's College express interest in visiting, perhaps that could be scheduled for the same day. I do not intend, however, to approach them on the matter. I find the structure very interesting, and I have no strong argument against it. But I do not think their arguments against our structure are strong either." And he concludes: "I think Wilkins's photographs could settle the question definitively."
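The self-complementarity at the heart of the model is easy to illustrate in a few lines: each strand fully determines its partner, which is exactly what makes copying by "unzipping" possible. A minimal sketch (the function names are mine, purely illustrative):

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C.
# Function names here are illustrative, not from any cited work.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand (orientation ignored for simplicity)."""
    return "".join(PAIR[base] for base in strand)

def chargaff_counts(strand: str):
    """Counts of A, T, G, C across the two strands of the duplex.
    Pairing forces A = T and G = C, i.e. Chargaff's rules."""
    duplex = strand + complement(strand)
    return (duplex.count("A"), duplex.count("T"),
            duplex.count("G"), duplex.count("C"))

# "Unzipping" and rebuilding: complementing twice recovers the original,
# which is why each half of the unzipped ladder can regenerate the whole.
template = "ATGCGTA"
copy = complement(complement(template))
```

Complementing twice returning the original sequence is the whole trick: each separated strand serves as a template for an exact copy of the duplex.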
At that congress of the world's leading researchers, Bragg announced the double helix for the first time. One could certainly argue that there is nothing especially "great" about Pauling's mistake; after all, his model put on the outside what belonged inside and had the wrong number of chains. "What matters now are his perfections, not his ancient imperfections. I remember above all the Pauling of fifty years ago, when he proclaimed that behind life there are no vital forces, only chemical bonds. Without that message, perhaps Crick and I would never have succeeded." Along the way there were many surprises. For example, before 2000, biologists believed that the human genome contained about one hundred thousand protein-coding genes. The possibilities range from a significant lengthening of human life expectancy to the creation of new life forms. The genomic era has made possible previously unimaginable achievements in forensic science. That effort eventually led investigators to an army laboratory as the most likely source of the strain. But the inquiries have reached a level far more fundamental than the purely biological: where do the fundamental building blocks of life come from, those molecules that can carry information and replicate? And on the physics side, going back to even more ancient origins, how did the hydrogen atom, so crucial to Pauling's hydrogen bond, appear in the universe? Although Gamow's mathematical solutions ultimately turned out to be wrong, they helped frame the questions of biology in the language of information. About five years earlier, Gamow had been busy solving an even more fundamental problem: the cosmic origin of hydrogen and helium. His solution was truly brilliant, but it did not explain the existence of the elements heavier than helium. That formidable task fell to another astrophysicist and cosmologist: Fred Hoyle.
On the one hand, Hoyle dealt with the evolution of the universe as a whole; on the other, with the appearance of life within it. He was at once one of the most distinguished scientists of the twentieth century and one of the most controversial. These theories were based on the hypothesis that all the matter in the universe was created in one big explosion at a particular moment in the remote past. "But now it turns out that, in one respect or another, all these theories conflict with what the observations require." This talk marks the birth of the term "big bang," which has since been inextricably linked to the initial event from which our universe sprang. Contrary to popular belief, Hoyle did not use the term pejoratively; he was simply trying to create a mental image for his listeners. Ironically, the man who coined and popularized the term big bang was a scientist who always opposed the very idea on which the model is based. The term has even survived a public referendum: in 1993, Sky & Telescope magazine solicited suggestions from its readers for a more appropriate name, in an act generally seen as an attempt at political correctness of cosmic proportions. But when the three judges reviewed the 13,099 suggestions, they found none that deserved to replace it. The seven episodes of the series were broadcast in 1961, with the actress Julie Christie in the lead role. His mother had studied music and for a time played the piano at a local cinema, accompanying silent films. From an early age, Hoyle showed signs of independence and, on occasion, of dissent. His disdain for convention stayed with him through his university years. In 1939 he decided to forgo the doctoral degree for the "pragmatic motive," in his words, of not having to pay more taxes! It is not surprising that this independent, curiosity-driven thinker matured into a brilliant scientist.
For his contributions to astrophysics and cosmology, Hoyle was probably the most prominent figure for at least a quarter of a century, yet at the same time he never shied away from controversy. "To do so without becoming a crank requires fine judgment, especially on long-term issues that cannot be resolved quickly." We shall soon discover that Hoyle followed his own advice rather too literally. Even without World War II, 1939 was a critical year for Hoyle. One after the other, his two research supervisors left Cambridge to fill positions elsewhere. His third supervisor was the great physicist Paul Dirac, one of the founders of quantum mechanics, the revolutionary theory of the subatomic microworld. After the abundance of novel ideas in the 1920s, the science of the late 1930s paled in comparison. Hoyle took the warning to heart, abandoned theoretical nuclear physics, and turned his interests toward the stars. Of Hoyle's many achievements, I want to focus on only a few of his contributions to one specific topic: nuclear astrophysics. Hoyle's work in this area has become one of the main pillars on which our current knowledge of the stars and their evolution rests. Over the course of his career, he solved the mystery of how carbon atoms, the scaffolding of complexity and of life as we know it, are formed in the universe. However, to assess Hoyle's accomplishments properly, we must first understand the backdrop against which he produced his masterpiece. Elements are those substances that cannot be decomposed or transformed by chemical means. The engraving was made at the university's nanotechnology center. The periodic table currently contains 118 elements, of which 94 occur naturally on Earth. Or whether, alternatively, these rather complex entities could have simpler origins. Figure 19. There was in fact someone who raised these questions even before the publication of the periodic table.
In two articles published in 1815 and 1816, the English chemist William Prout proposed the hypothesis that the atoms of all elements were really condensations of different numbers of hydrogen atoms. Eddington proposed in 1920 that four hydrogen atoms could somehow combine to form a helium atom. Eddington estimated that in this way the Sun could shine for billions of years by converting only a small fraction of its mass of hydrogen into helium. A few years later, Eddington went further, speculating that stars like the Sun could be natural "laboratories" in which, somehow, through nuclear reactions, some elements were transmuted into others. The Eddington and Perrin hypothesis marks the birth of the concept of stellar nucleosynthesis in astrophysics: the idea that at least some elements can be synthesized in the fiery interiors of stars. As can be inferred from the above, Eddington was one of the greatest champions of Einstein's theory of relativity. "I was just wondering who the third one might be." To continue with the history of the formation of the elements, we should recall some of the basic properties of atoms. What follows is a very brief refresher. All ordinary matter is composed of atoms, and every atom has a nucleus at its center, around which electrons move in orbital clouds. Although neutrons are stable when confined within a nucleus, a free neutron is unstable: with a half-life of about fifteen minutes, it decays into a proton, an electron, and a virtually undetectable, very light, electrically neutral particle called an antineutrino. In unstable nuclei, neutrons can decay in the same way. The simplest and lightest atom is the hydrogen atom, formed by a nucleus containing a single proton. A single electron moves around this proton in orbits whose probabilities can be calculated using quantum mechanics. Hydrogen is also the most abundant element in the universe, making up about 74 percent of all ordinary matter.
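The fifteen-minute half-life quoted above translates directly into exponential decay: after each successive fifteen-minute interval, half of any population of free neutrons has disintegrated. A small sketch (the ~15-minute figure is the approximate value given in the text):

```python
# Exponential decay of free neutrons, using the ~15-minute half-life
# quoted in the text (illustrative; the precise value differs slightly).
HALF_LIFE_MIN = 15.0

def surviving_fraction(t_minutes: float) -> float:
    """Fraction of an initial population of free neutrons still
    undecayed after t minutes: N(t)/N0 = (1/2)^(t / half-life)."""
    return 0.5 ** (t_minutes / HALF_LIFE_MIN)
```

So after one half-life half the neutrons remain, and after an hour (four half-lives) only one part in sixteen survives, which is why free neutrons play no lasting role outside nuclei.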
Baryonic matter is the stuff of which stars, planets, and human beings are made. Moving from left to right through the periodic table, at each step the number of protons in the nucleus increases by one, as does the number of electrons in orbit. Figure 20. Since the number of protons equals the number of electrons, atoms are electrically neutral in their unaltered state. The element that follows hydrogen in the periodic table is helium, which has two protons in its nucleus; the helium nucleus also contains two neutrons. Helium is the second most abundant element, making up about 24 percent of ordinary matter. Hydrogen has atomic number 1, helium 2, iron 26, and uranium 92. The total number of protons and neutrons in a nucleus is called its mass number. Nuclei of the same chemical element can have different numbers of neutrons; these are known as isotopes of that element. For example, neon has isotopes with ten, eleven, or twelve neutrons in the nucleus. Similarly, hydrogen also occurs in nature as an isotope commonly known as deuterium, and as another called tritium. Returning to the central problem of the synthesis of the elements, the physicists of the first half of the twentieth century faced a series of questions related to the periodic table. The first and most important was how all those elements were formed. Or, again, why the stars are composed mostly of hydrogen and helium. However, as Kelvin had clearly shown, this reserve could sustain the Sun's radiation only for a limited time, no more than a few tens of millions of years. This limit clashed starkly with the geological and astrophysical clues that pointed, ever more precisely, to ages of billions of years for both the Earth and the Sun. Eddington was fully aware of this obvious discrepancy: "But if we decide to bury the corpse, let us frankly recognize the position in which we are left. A star is drawing on some vast reservoir of energy by means unknown to us."
This reserve can hardly be anything other than subatomic energy, which, as is known, exists in abundance in all matter. Despite his enthusiasm for the idea that stars could extract their energy from the fusion of four hydrogen nuclei to form a helium nucleus, Eddington had no specific mechanism by which this process could be carried out. In particular, the problem of mutual electrostatic repulsion, discussed above, had to be solved. The obstacle is the following: two protons repel each other electrostatically because both carry positive electric charge. This Coulomb force has a long range, which is why it is the dominant force between protons at distances greater than the size of the atomic nucleus. Within the nucleus, however, the powerful attraction of the nuclear force takes over and overcomes the electrical repulsion. The snag in Eddington's hypothesis was that the temperature calculated for the center of the Sun was not high enough to impart the necessary energy to the protons. In classical physics, that would have been a death sentence for his theory: particles with insufficient energy to overcome a barrier simply cannot cross it. Luckily, quantum mechanics, the theory that describes the behavior of subatomic particles and light, came to the rescue. In quantum mechanics, particles behave like waves, and all processes are inherently probabilistic. Waves do not occupy a precise location the way classical particles do; they are spread out, and this gives a particle a small but nonzero probability of "tunneling" through an energy barrier that it could never surmount classically. In an extraordinary article published in 1939, Bethe discussed two possible energy-producing pathways by which hydrogen can be converted into helium. The first is a chain of reactions that begins with the fusion of two protons. The second mechanism, known as the carbon-nitrogen cycle, is a cyclic reaction in which carbon and nitrogen nuclei act as catalysts. The net result is still the fusion of four protons to form a helium nucleus, accompanied by the release of energy. As already noted, and as the name implies, the CN cycle requires the presence of carbon and nitrogen atoms as catalysts.
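The quantum-tunneling rescue described above can be made roughly quantitative with the textbook Gamow-factor estimate (a standard formula, not a calculation from this chapter): the barrier-penetration probability is about exp(−√(E_G/E)), where the "Gamow energy" is E_G = 2 μc² (π α Z₁Z₂)². A hedged sketch for two protons at a solar-core temperature of about fifteen million degrees:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
K_BOLTZ = 8.617e-5           # Boltzmann constant in eV per kelvin
MP_C2 = 938.272e6            # proton rest energy in eV

def gamow_energy(z1: int, z2: int, reduced_mass_c2: float) -> float:
    """Gamow energy E_G = 2 * mu*c^2 * (pi * alpha * Z1 * Z2)**2, in eV."""
    return 2.0 * reduced_mass_c2 * (math.pi * ALPHA * z1 * z2) ** 2

def tunneling_probability(energy_ev: float, e_gamow_ev: float) -> float:
    """Low-energy Coulomb-barrier penetration factor, exp(-sqrt(E_G/E))."""
    return math.exp(-math.sqrt(e_gamow_ev / energy_ev))

# Two protons: the reduced mass is half a proton mass.
e_g = gamow_energy(1, 1, MP_C2 / 2)       # roughly 490 keV
e_thermal = K_BOLTZ * 1.5e7               # roughly 1.3 keV at 15 million K
p_tunnel = tunneling_probability(e_thermal, e_g)
print(e_g, p_tunnel)
```

The probability comes out tiny (of order 10⁻⁹ per encounter at thermal energy), but with some 10⁵⁷ protons colliding incessantly, it is enough to keep the Sun shining.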
But Bethe's theory failed to show how these carbon and nitrogen atoms had formed. Bethe considered the possibility that carbon was synthesized by the fusion of three helium nuclei, and concluded: "We must assume that the heavier elements were manufactured before the stars reached their current state of temperature and density." Bethe's statement posed a great enigma, since the astronomers and geologists of the time had come to the conclusion that the different chemical elements had to have, for the most part, a common origin. In particular, the fact that atoms such as carbon, nitrogen, oxygen and iron appear to have approximately the same relative abundances throughout the Milky Way galaxy pointed clearly to the existence of a universal formation process. Consequently, if they accepted Bethe's verdict, physicists had to think of some common synthesis that could have operated before the present stars reached their equilibrium. The idea was brilliant in its simplicity. In the dense primordial fireball, Gamow and Alpher argued, matter consisted of a highly compressed neutron gas. They called this primal substance ylem. As these neutrons began to decay into protons and electrons, in principle all the heavier nuclei could be produced by the successive capture of one neutron after another from the surrounding sea of neutrons. Atoms would thus climb the periodic table, ascending one rung with each successive neutron capture (with the accompanying beta decays converting neutrons into protons). The entire process was assumed to be controlled by the probability that a given nucleus would capture another neutron, but also by the expansion of the universe. The cosmic expansion steadily reduced the density of matter over time and, consequently, progressively slowed the rates of the nuclear reactions. Alpher carried out most of the calculations, and the results were published in the April 1, 1948 issue of Physical Review.
Bethe agreed to have his name included, and today that publication is often referred to as the "alphabetical" (αβγ) article. The challenge is not difficult to understand if we use a simple mechanical metaphor: it is very difficult to climb a ladder when some of the rungs are missing. In nature, there are no stable nuclei with atomic mass 5 or 8: those two rungs are simply absent. Consequently, helium cannot capture a neutron to produce a nucleus with a half-life long enough to keep the neutron-capture scheme going. Lithium faces a similar difficulty because of the gap at atomic mass 8. Thus, the mass gaps thwarted progress along the path that Gamow and Alpher had opened. Even the great physicist Enrico Fermi, who examined the problem in some detail with a collaborator, was disappointed to conclude that synthesis in the big bang was "incapable of explaining the way in which the elements were formed." This is where Fred Hoyle entered the scene. At that time, that observatory had the largest telescope in the world. From Baade, Hoyle learned how extraordinarily dense and hot the cores of massive stars can become during the last stages of their lives. In a state of statistical nuclear equilibrium, although nuclear reactions continue to occur, each reaction and its inverse proceed at the same rate, so that there is no net change in the abundances of the elements. Consequently, Hoyle reasoned, he could use the powerful methods of the branch of physics known as statistical mechanics to estimate the relative abundances of the various chemical elements. However, to actually carry out the calculations, he needed to know the masses of all the nuclei involved, and that information was not available to him during the war years. Hoyle had to wait until the spring of 1945 to obtain a table of nuclear masses from the nuclear physicist Otto Frisch.
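The broken-ladder metaphor can be sketched as a toy program: climb the mass ladder one unit at a time, and stall whenever the next rung (mass 5 or 8) does not exist. This is only an illustration of the bookkeeping, not a model of the actual nuclear physics:

```python
# Toy model of the neutron-capture ladder: climb one mass unit at a time,
# but stop if the next rung is one of the missing masses (5 and 8),
# where no sufficiently long-lived nucleus exists.
MISSING_MASSES = {5, 8}

def climb(start_mass: int, top: int = 20) -> int:
    """Return the mass at which sequential single-unit capture stalls."""
    mass = start_mass
    while mass + 1 <= top and (mass + 1) not in MISSING_MASSES:
        mass += 1
    return mass

print(climb(1))   # stalls at 4: helium cannot reach mass 5
print(climb(6))   # stalls at 7: lithium cannot reach mass 8
```

Starting from hydrogen, the chain halts at helium-4; even granting a start beyond the first gap, it halts again at mass 7, which is exactly why Gamow and Alpher's scheme could not proceed past the lightest elements.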
The result of his calculations was an epoch-making article, published in 1946, in which Hoyle outlined the framework of a theory of the formation of the elements, from carbon upwards, inside stars. Our entire solar system was formed about 4,500 million years ago from a mixture of ingredients cooked in the interiors of previous generations of stars. When analyzing the consequences of his embryonic theory, Hoyle was pleased to find a marked peak in the abundances of the elements surrounding iron in the periodic table, just as the observations seemed to indicate. This agreement with the "iron peak," as it came to be known, told Hoyle that he must be on the right track. However, those missing rungs of the ladder continued to frustrate any attempt to build a detailed network of nuclear reactions that produced all the elements. To sidestep the problem of the mass gaps, Hoyle decided in 1949 to re-examine the possibility of fusing three helium nuclei to create a carbon nucleus, and assigned the problem to one of his doctoral students. But that student decided to abandon his doctoral studies before completing them, while forgetting to cancel his formal registration. In the end, two astrophysicists published results, although the work of one of them went almost unnoticed. The Estonian-Irish astronomer Ernst Öpik proposed in 1951 that in the contracted cores of evolved stars, the temperature could reach several hundred million degrees. At those temperatures, Öpik argued, most of the helium would fuse into carbon. Edwin Salpeter recognized immediately that three helium nuclei could hardly be expected to collide simultaneously; it was far more likely that two of them would stick together just long enough to be struck by a third. In other words, Salpeter realized that carbon might be produced by a low-probability process in two steps.
But there was still a serious problem: the experimental data showed that this particular beryllium isotope disintegrates back into two alpha particles (helium nuclei) with a very short half-life of only about 10⁻¹⁶ seconds. The question was whether, at a temperature of more than one hundred million degrees Kelvin, the reaction rate could be so high that some of these ephemeral beryllium nuclei would fuse with a third helium nucleus before disintegrating. On reading Salpeter's article, Hoyle's first reaction was anger at himself for having allowed such an important calculation to slip through his fingers because of the setback with the doctoral student. About thirty years later he described the moment he understood this: "Bad luck for old Ed, I thought." But did this augur the failure of the entire scheme? This was precisely the kind of situation in which Hoyle revealed his incredible physical intuition and the clarity of his thinking. He started with the obvious: "There has to be some way to synthesize 12C." After all, carbon was not only relatively abundant in the universe, but crucial for life. After evaluating all the potential reactions in his head, Hoyle concluded that "nothing was better than 3α." But there was a further danger: the carbon thus produced could itself capture another helium nucleus and be converted into oxygen. So how could carbon be prevented from being lost to oxygen? In Hoyle's mind, there was only one way: "3α had to go much faster than what was calculated." In other words, beryllium and helium had to be able to fuse so easily and so quickly that the rate of carbon production far exceeded the rate of its destruction. But what could substantially accelerate the rate of carbon synthesis? Nuclear physicists knew of one such thing: a "resonant state" in the carbon nucleus. Resonant states are energies at which the probability of a reaction reaches a peak. If such a state existed at the right energy, the probability that an unstable beryllium nucleus would absorb another helium nucleus to form carbon would be greatly increased.
But Hoyle did not merely point out that a resonance could be useful; he calculated precisely the energy level of the carbon nucleus needed to produce the desired effect. Nuclear physicists measure the energies of nuclei in units called MeV. In addition, using the known symmetry properties of the 8Be and 4He nuclei, he predicted the quantum-mechanical properties of this resonant state. All this was impressive, but there was one "small" problem: no such state was known! The very idea that Hoyle could use general astrophysical observations to make an extremely precise prediction in nuclear physics seemed nothing short of preposterous, but Hoyle had plenty of self-confidence. All this happened in January 1953, while Hoyle was spending a sabbatical of a few months at Caltech. What happened in that meeting is already legend. Fowler might well have replied: "Leave us alone, young man, do not bother us." Instead, as the recollection goes: "I cannot remember whether he summoned the Kellogg clan there and then, or whether it was a few hours later, or after a day or two... It was then that the general consensus decided that a new experiment should be carried out. Even Willy seemed somewhat skeptical." Whaling, Dunbar and their collaborators decided to tackle the problem by bombarding nitrogen nuclei with deuterons, a nuclear reaction that produces carbon nuclei and alpha particles. They ended their report with an acknowledgment: "We are indebted to Professor Hoyle for making us notice the astrophysical significance of this level." Despite the spectacular success of his prediction, Hoyle realized that it was no time to rest on his laurels. Carbon would survive only if it was not immediately consumed in its turn; in other words, he had to be sure that there was no resonant state in the oxygen nucleus that would accelerate the rate of the reaction of carbon with an alpha particle. To complete his triumph with the theory of carbon production, Hoyle showed that this reaction is not resonant: the energy of the relevant level in oxygen is about 1 percent lower than the value that would have made it so.
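The energy bookkeeping behind Hoyle's prediction can be reproduced with standard atomic-mass values (the numerical masses below are well-known reference figures, not taken from this chapter): the combined mass of a beryllium-8 nucleus and a helium-4 nucleus sits a little above the carbon-12 ground state, so a useful resonance had to lie just above that threshold.

```python
U_TO_MEV = 931.494     # energy equivalent of one atomic mass unit, in MeV
M_HE4 = 4.002602       # standard atomic masses in u (reference values, assumed here)
M_BE8 = 8.005305
M_C12 = 12.0           # carbon-12 defines the atomic mass unit

# Energy of 8Be + 4He relative to the carbon-12 ground state:
threshold_mev = (M_BE8 + M_HE4 - M_C12) * U_TO_MEV
print(round(threshold_mev, 2))   # ~7.37 MeV
```

The resonance Hoyle called for, now known as the Hoyle state, was indeed found just above this threshold, at about 7.65 MeV.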
One might have thought that, with such a bombshell in his hands, Hoyle would rush to announce it to the world. Yet even in later years, Hoyle never gave much importance to that remarkable achievement. In 1986 he commented: "In a certain sense, that was only a small detail." Others did not think it was "a small detail." Gamow, for one, paid playful tribute in a parody of Genesis: "And the ylem was without form and without number, and the nucleons ran crazily over the face of the abyss. And God said: Let the mass be two; and it was mass two. And God saw the deuterium and saw that it was good. And God saw the tritium and the tralphium and saw that they were good. And God continued to call one number after another until he arrived at the transuranic elements. But when he looked at His work, he saw that it was not good. God was very disappointed, and at first he wanted to contract the universe again and start everything over from the beginning. But that would have been too easy. And so, being omnipotent, God decided to correct His error in a most impossible way. And God looked at Hoyle and told him to make the heavy elements any way he pleased. And Hoyle decided to make the heavy elements in the stars, and to scatter them around through supernova explosions. But in doing so he had to obtain the same abundance curve that would have resulted from nucleosynthesis in the ylem, if God had not forgotten to call mass five. This prediction was later verified experimentally." In the article in question, published in 1954, Hoyle explained how the abundances of the heavy elements we observe today are the direct result of stellar evolution. Stars spend their lives in a continuous struggle with gravity. In the absence of an opposing force, gravity would make a star collapse toward its center. By "igniting" nuclear reactions in their cores, stars generate very high temperatures, and the associated high pressures sustain them against their own weight.
Hoyle described how, once the fuel in the core is consumed, gravitational contraction causes the core temperature to rise until the next nuclear reaction "ignites." In this way, Hoyle reasoned, each successive episode of core burning synthesized new elements, up to iron; beyond iron, fusion no longer releases energy, because the iron nucleus is the most tightly bound of all. Without an internal heat source to fight gravity, the core of the star collapses, unleashing a spectacular explosion. These supernova explosions forcefully eject all the forged elements into interstellar space, where they enrich the gas from which subsequent generations of stars and planets will form. The temperatures reached during the explosions are so high that elements heavier than iron are formed by neutrons bombarding the stellar material. Figure 21. The scenario proposed by Hoyle remains to this day our picture of the evolution of the stars. Surprisingly, this article, key to the development of the theory of stellar nucleosynthesis, received relatively little attention in its time, perhaps because it was published in a new astrophysical journal that was still little known in the community of nuclear physicists. Willy Fowler, too, was impressed by Hoyle's prediction of the resonant level of carbon. In fact, he spent his next sabbatical at Cambridge in order to work with Hoyle. Together with their collaborators, they described no fewer than eight nuclear processes that synthesize elements in stars and identified the different astrophysical environments in which these processes take place. The article was a genuine tour de force. Running to 108 pages, it began with a romantic touch: two contradictory quotations from Shakespeare on the question of whether the stars govern the destiny of humanity. Yet try as they might, Hoyle and his collaborators failed to explain the abundances of the lightest elements by forming them inside stars. Helium, the second most abundant element in the cosmos, also proved problematic. This may seem surprising, given that stars so clearly manufacture helium.
After all, is not the fusion of four hydrogen nuclei to form helium the main source of energy for most stars, like the Sun? The difficulty lay not in the synthesis of helium in general, but in the synthesis of sufficient quantities of it. Meticulous calculations had shown that nucleosynthesis in stars predicted a cosmic helium abundance of only 1 to 4 percent, while the observed value was 24 percent. This left the big bang as the only plausible source of the light elements, as Gamow and Alpher had suggested. Gamow wanted all the elements created within minutes of the big bang. Hoyle wanted all the elements forged inside stars during the long process of stellar evolution. Figure 22. Hoyle had the opportunity to present his vision of the history of matter even in the Vatican. Among the two dozen guests were some of the most outstanding scientists in the astronomy and astrophysics of the time. The Dutch astronomer Jan Oort summed it all up from an astronomical perspective. Figure 23 shows Hoyle shaking hands with the Pope. The rest, as they say, is history. Fowler received the Nobel Prize in Physics in 1983. Many people, including Fowler himself, believed that the award should have been shared with Hoyle. So why was Hoyle not awarded the Nobel Prize? Others believed that Hoyle's insistence on decidedly unorthodox ideas about the big bang, which we will discuss at length in the next chapter, might have had something to do with his failure to receive the prize. Ironically, before Bondi and Gold worked for the navy at Witley, the British government had kept them interned as enemy aliens because of their Austrian roots. But Gold very soon changed his mind: "I also discovered that I had not been able to interpret Hoyle's apparent attitude of not listening. In fact, he listened with great care and had an extraordinarily good memory, as I would discover later, because he often remembered what I had told him better than I did."
In 1945 the three returned to Cambridge, and until 1949 they spent several hours together each day at Bondi's residence. It was during this period that they began to think about cosmology, the study of the entire observable universe treated as a unit. Hoyle suggested the subject of cosmology since, from his point of view, "the question had lain neglected for a long time." Hoyle, who had already read the article before, decided to reread it with more attention. They both realized that this almost encyclopedic essay offered a rather dispassionate review of the various possibilities of cosmic evolution without venturing an opinion. In his typically irreverent style, Hoyle immediately began to wonder: had they really cast a wide enough net? At the same time, Gold was making his way toward more philosophical ideas about the universe. These were the seeds of the steady-state theory of cosmology, which was proposed in 1948. As we will soon discover, the theory was a serious competitor of the big bang for more than fifteen years, becoming the center of an often bitter controversy.

Chapter 9 The same for all eternity?

Most of those who remember him for his popular books and his prominent radio programs know him as a cosmologist and as one of the originators of the idea of a steady-state universe. But what does it really mean to be a cosmologist? The question "How far away is the planet closest to Earth?" is not a question of modern cosmology. Even questions on a much larger scale, such as "What is the distance from the Milky Way to the nearest galaxy?", are not considered questions of cosmology. Cosmology deals with the average properties of our observable universe, obtained by averaging over the reach of our most powerful telescopes. Although galaxies tend to reside in small groups or large clusters, both held together by the force of gravity, when we sample a large enough volume, the universe appears homogeneous and isotropic.
In other words, there is no privileged position in the universe, and everything looks the same in all directions. Statistically speaking, any cosmic cube five hundred million light-years on a side or larger will look very similar in content, regardless of its location in the universe. This large-scale homogeneity becomes ever more exact as we approach the scale of the "horizon" of our telescopes. Cosmology deals precisely with those questions that would yield the same answer regardless of the galaxy we happen to inhabit or the direction in which we point the telescope. Today the stipulation of homogeneity and isotropy is known as the cosmological principle, and the most powerful direct evidence of its validity comes from observations of the "flash of creation": the cosmic microwave background radiation. This radiation is a vestige of the hot, dense, opaque primordial fireball. It arrives from all directions and is isotropic to better than one part in ten thousand. Large-scale galaxy surveys also indicate a high degree of homogeneity. In any survey that covers a section of the cosmos large enough to constitute a "fair sample," even the most conspicuous structures shrink and smooth away. Since the cosmological principle had been so effective when applied to different positions in space, it was natural to ask whether it could be extended to apply to time as well. That is, can we say that the universe is unchanging in its large-scale appearance as well as in its physical laws? An amusing footnote is that, in posing that question, the illustrious trio may have been inspired by a British horror film entitled Dead of Night. The cosmic expansion seemed to point instead to a linear evolution, with a dense and hot beginning and a clear direction for the arrow of time. All those galaxies moving apart, until nothing remained but a terribly empty space? But before plunging into this fascinating subject, let us go back for a moment to the 1920s.
The story is especially pertinent because a new and unexpected turn in the chronicle of events created a great stir in the community of astronomers and historians of science in 2011.

The Cosmic Expansion: Lost and Found

When cosmologists say that our universe expands, they base this assertion fundamentally on the evidence derived from the apparent motion of galaxies. There is a very simplified but frequently used example that helps to visualize the concept. Imagine a two-dimensional world that exists only on the surface of a rubber sphere. The galaxies of this world are simply small round specks stuck to its surface. For the inhabitants of this world, neither the interior of the sphere nor the space outside it exists; their entire universe is confined to the surface. It is important to understand that this world has no center: none of the specks on the surface is different from any other. This universe also has no limits or edges; a speck moving in any direction along the spherical surface would never reach an edge. Now, what would happen if this sphere were inflated? No matter which speck we live on, we would observe all the other specks moving away from us, and the farther away a speck is, the faster it would recede. In other words, the recession speed is proportional to the distance. Note that the expansion of the universe cannot be compared to the explosion of a hand grenade. An explosion occurs within a pre-existing space and has a well-defined center. In the universe, however, the recessional motion arises from the fact that space itself is stretching. In fact, as early as 1922, the Russian mathematician Aleksandr Friedmann showed that general relativity allowed an expanding universe, full of matter and without limits. Although few took note of Friedmann's results, the idea of a dynamic universe began to gain currency during the 1920s.
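The inflating-balloon analogy can be put in numbers with a toy calculation: if every separation grows by the same factor, the apparent recession speed of each speck comes out strictly proportional to its distance. All the numbers below are arbitrary illustrations:

```python
# Toy version of the inflating-balloon analogy: every distance between
# specks grows by the same factor, so the apparent recession speed is
# proportional to distance.
distances = [1.0, 2.0, 5.0]   # separations from "our" speck, arbitrary units
growth = 1.10                 # the sphere inflates by 10% in one time step
dt = 1.0

speeds = [(growth * d - d) / dt for d in distances]
ratios = [v / d for v, d in zip(speeds, distances)]
print(ratios)   # the same speed-to-distance ratio for every speck
```

That constant speed-to-distance ratio is exactly the role the Hubble constant plays in the real universe.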
Consequently, the interpretation of Hubble's observations in the context of an expanding universe became popular quite quickly. Physicists sometimes tend to ignore the history of their discipline. After all, who cares who discovered what, as long as the discoveries themselves come to light? Only totalitarian regimes have been obsessed with insisting that all good ideas are their own. An old joke tells of a visitor to a Soviet museum. In the first room, he sees a giant painting of a Russian man he has never heard of. When he asks who the person is, he is told: "This is so-and-so, the inventor of the radio." In the second room hangs another giant portrait of a complete stranger. And so it continues through a dozen more rooms. In the last room there is a painting that dwarfs all the others by comparison. When the visitor asks who this is, the host smiles and replies: "This is the man who invented all the men in the previous rooms." In a few cases, however, the discoveries are of such magnitude that it can be very useful to understand the path that led to them, and the correct attribution. During 2011 a passionate debate erupted over who deserves credit for the discovery of cosmic expansion. In particular, a few articles raised the suspicion that during the 1920s certain dishonest practices of censorship had been applied to ensure that Edwin Hubble got priority for the discovery. Here, very briefly, are the context and the most relevant facts of this debate. By February 1922, the astronomer Vesto Slipher had measured the radial velocities of forty-one galaxies. For the numerical value of the expansion rate, which we know today as the Hubble constant, Lemaître obtained 625 (in the customary units of kilometers per second per megaparsec). Two years later, Edwin Hubble obtained a value of approximately 500 for the same quantity. In fact, Hubble used essentially the same recession velocities without ever mentioning in his article that they were Slipher's. Hubble did, however, use better distances, based in part on better stellar distance indicators.
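The values quoted above can be turned into a rough age scale for the universe: the inverse of the Hubble constant, 1/H₀, is the time the galaxies would have needed to reach their present separations at their current speeds. A hedged sketch (the unit conversions are standard; the interpretation as an age ignores deceleration, which shortens it further):

```python
KM_PER_MPC = 3.086e19      # kilometres in one megaparsec
SEC_PER_YEAR = 3.156e7     # seconds in one year

def hubble_time_years(h0_km_s_mpc: float) -> float:
    """Expansion timescale 1/H0, for H0 given in km/s per Mpc."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_sec / SEC_PER_YEAR

print(hubble_time_years(500))   # ~2.0e9 years, for Hubble's value
print(hubble_time_years(625))   # ~1.6e9 years, for Lemaitre's value
```

These uncomfortably short timescales, shorter than the estimated age of the Earth, are the root of the "age crisis" that made a beginningless steady-state universe look attractive.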
Lemaître was fully aware that the distances he had used were only approximate. He concluded that the available distance estimates seemed insufficient to assess the validity of the linear relation he had discovered. And this is where the plot thickens. In the same note, Lemaître also calculated two possible values for the Hubble constant, 575 and 670, depending on how the data were grouped. The South African mathematician David Block went even further. I began by examining the circumstances surrounding the translation of Lemaître's article. The most important paragraph of the letter reads as follows: Briefly, if Soc. Scientifique de Bruxelles is willing to give us her permission, we would like the article to be translated into English. I imagine that if there were additions a note could be inserted indicating that §§1-n are substantially from the Brussels article + the rest is new. Personally, and also on behalf of the Society, I am confident that I can do this. Figures 26a and 26b. My immediate reaction was that the text of Smart's letter was completely innocent, and certainly did not suggest any attempt at editing or censorship. But although I was quite convinced of the correctness of this non-conspiratorial interpretation of Smart's letter, the two main mysteries remained unresolved. After inspecting hundreds of irrelevant documents and almost giving up, I discovered two "smoking guns." The first concerned, of course, the decision mentioned in Smart's letter to Lemaître. Second, I found Lemaître's response to Smart's letter, dated March 9, 1931: I am sending you a translation of the article. I did not think it appropriate to reproduce the provisional discussion of radial velocities, which is clearly of no actual interest, nor the geometric note, which could be replaced by a short bibliography of old and new articles on the subject. I enclose a text in French indicating the passages omitted in the translation.
I have made as accurate a translation as I could, but I would be very grateful if any of you were so kind as to read it and correct my English, which I fear is quite rudimentary. I have not changed any formula; I have not even modified the final suggestion, which is not confirmed by my recent work. Figure 27. This puts an end, once and for all, to the speculation about who translated the article and who deleted the paragraphs: it was Lemaître himself! Lemaître's letter also opens a fascinating window onto the scientific psychology of the researchers of the 1920s. Lemaître was not at all obsessed with establishing priority for his original discovery. Lemaître was officially elected as an associate on May 12, 1939. But that "prediction" turned out to be incorrect. In the words of Bondi, "That night dinner was quite late, and it did not take us long to conclude that it was a perfectly possible solution." In other words, an immutable universe, without beginning or end, began to seem more and more attractive to them. Thereafter, however, Hoyle adopted a somewhat different approach to the problem from that of his fellow scientists. The attitude of Bondi and Gold was based on an attractive philosophical concept. If the universe is evolving and changing, they argued, there is no clear reason to believe that the laws of nature have permanent validity. After all, those laws were established on the basis of experiments done here and now. In addition, Bondi and Gold sensed that the cosmological principle, as originally enunciated, presented another difficulty. It assumed that observers located in different galaxies, anywhere in the universe, would discern the same large-scale picture of the cosmos. Hubble's determination of the rate of expansion implied a nightmarish scenario in which the universe was only 1,200 million years old, much less than the estimated age of the Earth! In particular, he developed his theory within the framework of Einstein's general relativity.
He started from the observed fact that the universe was expanding. This immediately posed a question: if the galaxies are continually moving away from each other, does that mean that space is becoming ever emptier? Hoyle answered with a categorical no: new matter, he proposed, is continuously created at just the rate needed to keep the average density constant. In this way, Hoyle reasoned, the universe remains in a steady state. On one occasion he put it pithily: "Things are the way they are because they were the way they were." The difference between a steady-state universe and an evolving universe is shown schematically in Figure 28, where I have again used the analogy of the inflating sphere. In both cases, we start with a sample of the universe in which the galaxies are represented by small specks. In the evolutionary scenario, after a while the galaxies have moved away from each other, reducing the overall density of matter. In the steady-state scenario, however, new galaxies have been created, so that the average density remains the same. The idea that matter is continuously created out of nothing may seem crazy at first sight. However, as Hoyle hastened to point out, no one knew where matter had come from in big bang cosmology either. To achieve a steady state, Hoyle added to the equations of Einstein's general relativity a term representing a "creation field," whose effect was to create matter spontaneously. Hoyle did not know for certain what form this creation took, but he surmised: "The creation of neutrons seems to be the most likely possibility. It would be expected that the subsequent disintegrations would provide the hydrogen needed by astrophysicists. In addition, the electrical neutrality of the universe would be guaranteed." The rate at which new atoms supposedly materialized from empty space was far too small to be directly observable. The key virtue of the steady-state scenario was that, as is expected of any good scientific theory, it was falsifiable.
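Just how small a creation rate the steady state requires can be estimated with a back-of-the-envelope calculation. Since expansion dilutes the number density as dn/dt = −3Hn, holding the density constant requires creation at the rate 3Hn. The input numbers below (a modern expansion rate and a mean hydrogen density) are illustrative assumptions, not figures from the text:

```python
KM_PER_MPC = 3.086e19
SEC_PER_YEAR = 3.156e7

# Illustrative inputs (assumed for this sketch, not from the text):
H0 = 70.0 / KM_PER_MPC     # expansion rate in 1/s, for H0 = 70 km/s/Mpc
n_atoms = 0.25             # mean density of hydrogen atoms per cubic metre

# In a steady state, creation must exactly offset dilution by expansion:
rate = 3.0 * H0 * n_atoms                    # atoms per cubic metre per second
years_per_atom = 1.0 / (rate * SEC_PER_YEAR) # years per new atom per cubic metre
print(years_per_atom)
```

With these inputs the answer is of order one new hydrogen atom per cubic metre every ten-odd billion years, which makes clear why the hypothesized creation could never be observed directly.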
As we saw in the previous chapter, Gamow even believed that all the chemical elements had been forged in that initial cosmic explosion. Against the big bang stood the steady-state model, with its infinite past and its immutable cosmic scenery despite the global expansion. However, the telescopes of the late 1940s were not powerful enough to detect an evolutionary trend of the kind implied by the big bang model. Hubble hoped to begin observing remote galaxies before long. Unfortunately, not even the great mirror of the Mount Palomar telescope could capture enough light from normal, very distant galaxies to discriminate unambiguously between the two rival theories. The three were invited to present their ideas about the steady-state universe. It may be thought that the possibilities for physical evolution, and perhaps even for life, are likewise without limit. These are the questions posed to the astronomer today. We hope that within a generation we may settle them with reasonable certainty. Paradoxically, although years later Hoyle would criticize natural selection, we can trace the origin of his thinking to Darwin. We will return to this question later, when we discuss the possible reasons for the obstinacy with which Hoyle clung to the idea of the steady state. When asked his opinion of the steady state, he said: "I am impressed by the character of the cosmologists!" The first signs of trouble for the steady-state model came not from optical telescopes but from radio astronomy. The universe is essentially transparent to radio waves, and the antennas of radio telescopes could therefore pick up signals even from distant galaxies that could barely be detected optically. In the 1950s, British and Australian scientists put the experience gained during the Second World War to good use by developing robust radio astronomy programs. Unlike Hoyle, Ryle came from a privileged background and had received the best that a private education can offer.
After some pioneering radio observations of the Sun in the late 1940s, Ryle and his group embarked on an ambitious program to detect radio sources beyond the solar system. However, since most of the sources did not correspond to any visible object, there was no way to determine their distances accurately. Ryle was of the opinion that they were peculiar stars within our galaxy, and he was prepared to defend that opinion tooth and nail at a small gathering of radio astronomy enthusiasts. The gathering was attended by both Hoyle and Gold, who did not hide their skepticism. At one point, Gold got up and challenged Ryle's conclusions, arguing that the only alternative was for the sources to be so close that they all lay within the relatively small thickness of the galactic disk. Hoyle responded by noting that of the half dozen sources that had been optically identified, five corresponded to external galaxies. Years later, he commented that Ryle had used the word "theoretical" in a way that implied an "inferior and detestable species." In this particular case, Gold and Hoyle turned out to be right. The irony is that it was precisely the great distance to the radio sources that would later become the cornerstone of Ryle's argument in favor of an evolving universe, and what led to the ruin of the steady-state theory.
Figure 29
Ryle had yet to suffer another temporary embarrassment in his campaign against steady-state cosmology, although the sequence of events began with what seemed like a victory. The big bang and steady-state models made different predictions about the distant universe. Because light from a remote region takes billions of years to reach us, in a continuously evolving universe we observe that part of the universe as it was when it was younger and therefore different. In the steady-state model, however, the universe has always existed in the same state. Consequently, the remote parts of the universe should look exactly like our immediate cosmic environment.
Ryle seized the opportunity provided by this testable prediction and began to collect a large sample of radio sources, counting how many there were in different intervals of intensity. What he found was that there were many more weak sources than strong ones. In other words, it seemed that the density of sources at distances of billions of light-years was much greater than the present density in our own neighborhood. Ryle presented his results on May 6, 1955, on the occasion of the prestigious Halley Lecture. Ryle considered that this suggestion "should be attributed to the fact that we had not understood the data well." He then added that, judging by the information presented, it was "very premature to consider that the vast majority of weak sources were extremely distant." Bondi was also skeptical of the interpretation of Ryle's results. In his view, the uncertainties that still afflicted the counts did not permit conclusive inferences. To reinforce his message, he reminded the audience that previous attempts to determine the geometry of the universe from galaxy counts had led to disparate conclusions. Needless to say, Hoyle did not agree with Ryle's interpretation. However, instead of engaging in long arguments, he decided to wait for better observations to surface and refute Ryle's finding. To the surprise of many astronomers, such contradictory results eventually appeared. The implications were clear to the Australian astronomers: "The deductions of cosmological interest derived from the analysis are unfounded." Hoyle did not bother to gloat. He did not, however, miss the fact that the synthesis of most nuclei in the centers of stars could be seen as supporting the steady-state thesis. Meanwhile, the efforts of Ryle's group led to the production of the third generation of the Cambridge catalogue of radio sources.
In the early 1960s, Ryle's group even had at its disposal an entirely new radio astronomy observatory, funded by the electronics company Mullard. The intellectual quarrels between Ryle and Hoyle did not cease, and they culminated in a particularly unpleasant incident. It all began in early 1961 with an apparently innocent telephone call from the Mullard company. Hoyle had no doubt that the announcement would be related to the counts of radio sources by intensity, but he could not believe that he would have been invited if the results contradicted the steady-state theory. In his own words: "Was I being naive to think that the new results Ryle was about to announce would be adverse to my position? But if they were adverse, why would they invite me so blatantly? Surely it meant that Ryle was going to announce results in line with the steady-state theory, ending with an elegant apology for his earlier misleading reports. So I began to compose an equally elegant reply in my head." Sadly for Hoyle, exactly what had seemed unthinkable is what happened. Ryle concluded by stating with certainty that the results now unambiguously showed a higher density of radio sources in the past, thus demonstrating that the steady-state theory was wrong. The stupefied Hoyle was then simply asked to comment on the results. Incredulous and humiliated, he could barely murmur a few sentences, and he fled the event as soon as he could. The discovery of extremely active galaxies, in which the accretion of mass onto central supermassive black holes releases enough radiation to outshine the entire galaxy, strengthened the evidence against a steady-state universe. These objects, known as quasars, were bright enough to be observed with optical telescopes. Their observation allowed astronomers to use Hubble's law to determine the distances to these sources and to show convincingly that quasars were, in fact, more common in the past than in the present.
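For readers who want the arithmetic behind that last step, Hubble's law relates a source's recession velocity to its distance; the numerical values below are modern ones, used purely for illustration:

```latex
v = H_0\, d \quad\Rightarrow\quad d = \frac{v}{H_0}, \qquad \text{and for small redshift } z,\; v \approx cz \;\Rightarrow\; d \approx \frac{cz}{H_0}
```

With H₀ ≈ 70 km/s per megaparsec, a quasar receding at 21,000 km/s would lie some 300 megaparsecs away, about a billion light-years.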
There was no way to avoid the conclusion that the universe was evolving and that in the past it had been denser. It was then that the floodgates finally opened, and the challenges to the steady-state model came one after another. To their annoyance, Arno Penzias and Robert Wilson picked up a background radio noise that seemed to be ubiquitous, a microwave radiation that appeared to come from all directions. Ignorant of the context, Penzias and Wilson did not at first realize what they had discovered. Consequently, the correct interpretation of Penzias and Wilson's results literally transformed the big bang theory from a hypothesis into experimentally tested physics. As the universe expanded, the incredibly hot, dense and opaque fireball gradually cooled down to its current temperature of 2.7 kelvin. Since then, observations of the cosmic microwave background have produced some of the most precise measurements in cosmology. It was assumed that these iron filaments would have condensed from metallic vapors, for example in the material ejected by supernova explosions. Despite all of Hoyle's efforts, from the mid-1960s most scientists stopped paying attention to the steady-state theory. Hoyle's continued attempts to show that every conflict between the theory and new observations could be resolved seemed increasingly forced and less plausible. Worse still, he seemed to have lost that "fine judgment" he had once counseled, which supposedly distinguished the sound scientist from one who "becomes a crank." In that anachronistic lecture, Hoyle tried to persuade his audience that all the convincing empirical evidence accumulated in favor of the big bang could still be explained by the steady-state theory. Bondi concluded elegantly: "So my challenge of finding fossils has been answered, long after I raised it." Hoyle, on the other hand, continued to defend a somewhat modified version of the steady-state theory.
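The cooling described above follows directly from the expansion: the radiation temperature scales inversely with the cosmic scale factor. Stated compactly (this is standard textbook cosmology, not a relation derived in this chapter):

```latex
T(t) \propto \frac{1}{a(t)}, \qquad T(z) = T_0\,(1+z), \qquad T_0 \approx 2.7\ \mathrm{K}
```

So the radiation we receive from an epoch when the universe was a thousand times smaller than today was emitted at roughly 2,700 K, near the temperature at which hydrogen gas becomes transparent.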
To express their disdain for the scientific establishment, on one of the pages of the book they presented a photograph of a flock of geese walking along a dirt road; the caption read: "Thus we see the conformist approach of standard cosmology." By then, however, Hoyle had drifted so far from conventional cosmology that very few even bothered to point out the shortcomings of the modified theory. To begin with, there is the question of the scale of the matter in whose context the error occurred. Darwin's error involved only one element of his theory. Kelvin's error concerned an assumption at the basis of one concrete calculation. Pauling's error affected a specific model. But Hoyle's error concerned nothing more and nothing less than an entire theory of the whole universe. The theory itself was bold, exceptionally clever, and consistent with all the observations that existed at the time. To answer this intriguing question, I began by seeking the opinions of some of Hoyle's younger students and collaborators. For example, Narlikar remembered how Hoyle had pointed out that all the other background radiations ever observed had been associated with astrophysical objects, and that he saw no reason why the cosmic microwave background should be different and related to a singular event. Similarly, around 1956 he thought that the stars could somehow produce the energy observed in the cosmic microwave background, if a way could be found to synthesize all the helium in stars. On a more emotional plane, Narlikar believed that the fact that Hoyle was not a religious person could have contributed to his objection to a universe that had suddenly appeared. According to Eggleton, Hoyle insisted that the origin of life required much more time than the age of the universe inferred from the big bang. This is an interesting point to which we will return shortly.
Faulkner admitted that he had been puzzled by Hoyle's inflexible stance toward the big bang. In his opinion, Hoyle "lost the plot a bit; as much in love with his creation as he was, he did not want to abandon it." He also made another interesting comment: that by the end of the 1960s, Hoyle's interest in what we might call "normal science" had declined, giving way to a more heterodox attitude. Rees fondly remembers Hoyle as always willing to support him, even though part of Rees's work on the cosmic microwave background and on quasars helped demolish the steady-state theory. Rees proposed two interesting potential causes of Hoyle's dissent. First, he emphasized the negative effects of scientific isolation. Since the scientists around Hoyle never, or almost never, contradicted him, this was clearly not the best recipe for changing one's opinions. To my surprise, Rees told me that while Hoyle had always been very generous and quick to encourage him, he had almost never argued with him about science. In fact, Hoyle did not test his views on new scientific discoveries against any young cosmologist outside his circle of supporters. Rees made another interesting assessment, one that recalls a comment of Faulkner's. The best evidence for it is found in some statements by Hoyle himself: "It must have been the case back then that, for a hunt to succeed, the whole group was needed. That is why the highest priority among scientists is not being correct, but having everyone think alike. It is this primitive and perhaps instinctive motivation that creates the establishment." It is difficult to imagine a more tenacious defense of dissent from the mainstream of science. However, as Rees pointed out, isolation has a price. Science does not progress along a straight line from A to B; it follows a zigzag path shaped by critical reassessment and by the interactions that help uncover flaws. By imposing academic isolation on himself, Hoyle denied himself those corrective forces.
Hoyle's idiosyncratic ideas about the origin of life undoubtedly contributed to his refusal to abandon the steady-state theory. "Faced with problems of an order of super-astronomical complexity, biologists have turned to fairy tales. It is to provide that unlimited canvas that the steady-state theory is required, or so at least I believe." In other words, Hoyle believed that an evolving universe, with the increase in disorder that accompanies it, does not provide the conditions necessary for something as ordered as biology to emerge. Nor did he believe that the age of the universe, inferred from the value of the Hubble constant, was sufficient for complex molecules to form. I must make it clear that the mainstream of evolutionary biology rejects this argument outright. Organisms that can reproduce can generate complexity through successive changes, whereas inanimate objects are incapable of transmitting reproductive modifications. To go beyond these partial explanations of Hoyle's error, especially as regards his refusal to acknowledge it, we need a better understanding of the concept of denial. Denial rarely arouses sympathy, especially in scientific circles. For good reason, scientists see this kind of denial as contrary to the investigative spirit, which demands that old theories give way to new ones when the experimental results so require. But research is carried out by human beings, and Sigmund Freud himself postulated that humans developed denial as a defense mechanism against external traumas or realities that threaten the ego. We are all familiar, for example, with denial as the first of the five recognized stages of grief. What may be less well known is that the experience of being wrong in a major enterprise is a trauma of comparable magnitude. The judicial system provides many examples indicating that this is indeed the case.
Denial offers the tormented mind a way to avoid reopening experiences that were believed to be finally overcome. I have already pointed out several times that the idea of a steady-state universe was brilliant at the time it was proposed. In some respects, the steady-state universe is simply a universe in which inflation always occurs. The physicist Alan Guth proposed inflation in 1981 to explain, among other things, cosmic homogeneity and isotropy. These are precisely the properties that today are attributed to inflation. Hoyle's genius was also evident in the fact that he belonged to that small group of scientists who are able to investigate two mutually incompatible theories in parallel. Lord Rees once described Hoyle as "the most creative and original astrophysicist of his generation." In my humble capacity as an astrophysicist, I must say that I completely agree. Although they proved wrong over time, Hoyle's theories were always dynamic, and they never stopped stimulating entire fields and catalyzing new ideas. Einstein's theories of special and general relativity completely revolutionized our perspective on two of the most basic concepts of existence: space and time. And yet the expression "biggest blunder" has come to be intimately associated with one of the ideas of this scientist, the most iconic of all time. Imagine tossing a set of keys into the air. Only for a moment are the keys still: when they reach the highest point. Obviously, the gravitational attraction of the Earth is responsible for this behavior. And if no other force opposes it, gravity alone ensures that the keys cannot simply hang motionless in the air. Two scientists independently demonstrated in the 1920s that the behavior of cosmic spacetime could be expected to be very similar. These important findings provided the theoretical context that allowed Lemaître and Hubble to discover that our universe is expanding. But we had better start at the beginning.
In 1917, Einstein himself was the first to attempt to understand the evolution of the entire universe in the light of his equations of general relativity. This effort initiated the transformation of cosmological problems from philosophical speculation into physics. The expansion of the universe had not yet been discovered. The astronomer Vesto Slipher's observations of the redshifts of the "nebulae" were not widely known at the time, and even less understood. Convinced in 1917 that the cosmos was immutable and static on its largest scales, Einstein had to find a way to prevent the universe described by his equations from collapsing under its own weight. To achieve a static configuration with a uniform distribution of matter, Einstein surmised that there must be some repulsive force that precisely counteracts gravity. In an influential article entitled "Cosmological Considerations on the General Theory of Relativity," he introduced into his equations a new term that gave rise to a surprising effect: a gravitationally repulsive force! The cosmic repulsion supposedly acted throughout the universe, causing every part of space to push away every other part, just the opposite of what matter and energy do. As we will soon discover, mass and energy distort space-time in such a way that matter tends to aggregate. The new cosmological term had the effect of distorting space-time in the opposite direction, causing matter to separate. The value of the new constant introduced by Einstein determined the strength of the repulsion. This new constant was designated by the Greek letter lambda, Λ, and was called the cosmological constant. Einstein showed that he could choose the value of the cosmological constant so that gravitational attraction and repulsion were perfectly balanced, giving rise to a static, eternal, homogeneous and immutable universe of fixed size. This model would later be known as the "Einstein universe."
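In modern notation (standard in any relativity textbook, not spelled out in the article itself), the modification amounts to adding a term Λg to the field equations:

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The left-hand side describes the curvature of space-time; the right-hand side, its content of matter and energy. For Λ > 0 the new term acts, on cosmological scales, as the repulsion just described.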
Einstein concluded his article with what turned out to be a comment pregnant with meaning: "This term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars." As you will have noticed, Einstein speaks here of the velocities of "stars," not of galaxies, since the existence and motions of the latter still lay beyond the astronomical horizon of the time. With few exceptions, everything is very clear down to the last detail. Cosmologists tend to emphasize the fact that, by introducing the cosmological constant, Einstein missed a golden opportunity to make a spectacular prediction. If he had stuck to his original equations, he could have predicted, more than a decade in advance of Hubble's observations, that the universe should be contracting or expanding. There is no doubt that this is true. However, as I will argue in the next chapter, the introduction of the cosmological constant could also have constituted an equally significant prediction. One may wonder how Einstein could add this new repulsive term to his equations without throwing away all the other successes of general relativity as an explanation of various puzzling phenomena. For example, general relativity accounted for the slight precession of the orbit of the planet Mercury with each successive revolution around the Sun. Naturally, Einstein was aware that his cosmological constant could spoil the agreement with observations, and to avoid undesirable consequences he arranged his equations so that the cosmic repulsion would increase in proportion to the spatial separation. That is to say, the repulsion was imperceptible on the distance scales of the solar system, but it became appreciable at great cosmological distances. This preserved all the experimental verifications of general relativity, which were based on measurements over relatively short distances.
Inexplicably, Einstein made a surprising mistake in thinking that the cosmological constant would produce a static universe. One can understand this without the help of sophisticated mathematics. The force of repulsion increases with distance, while the attractive force of gravity decreases with distance. Consequently, if the universe expanded even slightly, the repulsion would gain the upper hand and drive a runaway expansion; similarly, the slightest contraction would result in total collapse. Eddington was the first to point out this error, in 1930, although he attributed the insight to Lemaître. By then, however, it was already a known fact that the universe was expanding, so this particular defect of Einstein's static universe no longer mattered. I should add that in his original article Einstein did not specify the physical origin of the constant or its precise characteristics. We will return to these interesting questions in the next chapter. Despite these unresolved questions, Einstein was quite satisfied to have managed to construct a model of a static universe, a cosmos that he considered compatible with the prevailing astronomical thinking. At first, Einstein was also satisfied with the cosmological constant for another reason. The new modification of the original gravitational field equations seemed to harmonize the theory with certain philosophical principles that Einstein had relied on when conceiving general relativity. In particular, the original equations seemed to require what physicists call "boundary conditions," that is, the specification of a set of values for physical quantities at infinite distances. This seemed incongruent with the "spirit of relativity," in Einstein's words. Unlike Newton's concepts of absolute space and time, one of the basic premises of general relativity was that there is no absolute reference frame. In addition, Einstein had insisted that the distribution of matter and energy should determine the structure of space-time.
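The instability can be sketched with a Newtonian analogy (a standard textbook argument, not one taken from Einstein's paper): for a test galaxy at distance r from the center of a uniform matter distribution, the attractive and repulsive accelerations scale differently with distance,

```latex
g_{\text{attr}} = \frac{G M(r)}{r^{2}}, \qquad g_{\text{rep}} = \frac{\Lambda c^{2}}{3}\, r
```

At Einstein's chosen radius the two balance exactly; but nudge r slightly outward and the repulsion, which grows with r, wins and drives runaway expansion, while nudging it inward lets attraction win and drives collapse. The balance is that of a pencil standing on its tip.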
For example, a universe in which the distribution of matter thins away into nothingness would not have been satisfactory, since space-time could not be adequately defined without the presence of mass or energy. However, much to Einstein's displeasure, the original equations admitted empty space as a solution. That is why he was pleased to discover that his static universe needed no boundary conditions at all, since it was finite and curved back on itself like the surface of a sphere, without any boundary. In this universe, a ray of light would return to its point of origin before beginning a new circuit. Readers whose general relativity has grown a little rusty will appreciate a reminder, so here is a brief review of its fundamental principles. Newton's mechanics rested on absolute time and absolute space. Many experiments have since confirmed that the time intervals measured by two observers moving relative to each other do not coincide. Given the central role played by light in the theory, special relativity was tailored to match the laws describing electricity and magnetism. In fact, Einstein titled the 1905 article in which he presented his theory "On the Electrodynamics of Moving Bodies." However, as early as 1907 he was becoming aware that special relativity was incompatible with Newtonian gravity. The force of Newtonian gravitation supposedly acted instantaneously throughout space. Furthermore, the mere concept of simultaneity across the entire cosmos would require the existence of the very universal time that special relativity had so carefully invalidated. Although Einstein would not have used this particular example in 1907, because he did not yet know it, he understood the principle perfectly. To overcome these difficulties, Einstein embarked on a rather circuitous route, with many setbacks, that eventually led him to general relativity.
General relativity is still considered by many to be the most ingenious physical theory ever articulated. The theory rested fundamentally on two original and profound ideas: the equivalence between gravity and acceleration, and the transformation of space-time from a passive spectator into a main protagonist in the drama of universal dynamics. First, by reflecting on the experience of a person in free fall in the Earth's gravitational field, Einstein understood that acceleration and gravity are essentially indistinguishable. In a similar way, the astronauts of the space shuttle experienced "weightlessness" because both they and the shuttle were subject to the same acceleration with respect to Earth. "This simple idea made a deep impression on me," Einstein recalled. "It impelled me toward a theory of gravitation." Einstein's second idea completely transformed Newtonian gravity. Einstein defined gravity as the curvature of space-time. Not even light travels in a straight line; it bends in the distorted space near large masses. Figure 32 shows a letter written by Einstein in 1913, when he was still developing the theory. This crucial prediction was tested for the first time in 1919, during a solar eclipse. Time is also "curved" in general relativity: clocks located near very massive bodies tick more slowly than those far away. Matter and energy become eternal companions of space and time. By introducing general relativity, Einstein spectacularly solved the problem that had troubled Newton's theory: the propagation of the force of gravity faster than light. In general relativity, the transmission speed comes down to how fast waves in the fabric of space-time can travel from one place to another. Einstein showed that these distortions and stretchings, the geometric manifestations of gravity, travel at exactly the speed of light. In other words, changes in the gravitational field cannot be transmitted instantaneously.
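The slowing of clocks near a massive body can be made quantitative. For a static, spherical mass the standard (Schwarzschild) result, which is textbook relativity rather than anything derived in this chapter, reads:

```latex
d\tau = dt \sqrt{1 - \frac{2GM}{r c^{2}}}
```

Here dτ is the interval ticked off by a clock at distance r from a mass M, and dt is the interval measured by a clock far away; the closer the clock sits to the mass, the more slowly it runs.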
As happy as Einstein was with his cosmological constant and his static universe, his satisfaction would soon evaporate, as new scientific discoveries made the concept of a static universe untenable. First came some theoretical disappointments, the earliest of which struck almost immediately. A universe devoid of matter certainly contradicted Einstein's aspiration to link the geometry of the universe with its mass and energy content. De Sitter himself, on the other hand, was quite satisfied, since from the first moment he had objected to the introduction of the cosmological constant. In a letter to Einstein dated March 20, 1917, he argued that lambda might be desirable from a philosophical point of view, but not from a physical one. What disturbed him most was the fact that the cosmological constant could not be determined empirically. At that time, Einstein himself kept his mind open to all the options. The general theory of relativity allows the inclusion of the term Λgμν in the field equations. "Conviction is a good motivator, but a bad judge!" As we will see in the next chapter, Einstein predicted precisely what astronomers would discover eighty-one years later. Back in 1917, however, the setbacks kept coming. Although de Sitter's model at first glance seemed static, this was only an illusion. The second theoretical blow came from Aleksandr Friedmann. As has already been said, Friedmann showed in 1922 that Einstein's equations allowed non-static solutions in which the universe expanded or contracted. But the most serious challenge came from the observations. Einstein immediately understood the implications of an expanding universe, in which the pull of gravity merely slows the expansion. After Hubble's discovery he was forced to admit that there was no longer any need for a delicate balancing act between attraction and repulsion; consequently, the cosmological constant could be dropped from the equations.
In an article published in 1931, he formally abandoned the term, because "the theory of relativity seems to satisfy more naturally the recent results obtained by Hubble without needing the term." "Now it seems that in the dynamic case this end can be reached without the introduction of Λ." Surprisingly, however, the cosmological constant that had been banished returned to the fore in 1998. If you read almost any account of the history of the cosmological constant, you will almost certainly find the story that Einstein denounced the introduction of this constant into his equations as his "biggest blunder." Recall that Gamow was responsible for the idea of nucleosynthesis during the big bang, and also for some of the first reflections on the genetic code. Gamow recounted the "biggest blunder" story in two places: "Much later, while discussing cosmological problems with Einstein, he remarked that introducing the cosmological term had been the biggest blunder he ever made in his life." However, since it is well known that Gamow liked to embellish many of his anecdotes, I decided to dig a little deeper with the intention of establishing the authenticity of the story. My motivation to investigate this particular event was heightened because the recent resurrection of the cosmological constant has turned the expression "biggest blunder" into one of Einstein's most cited phrases. I began by trying to figure out whether Gamow really intended to quote Einstein directly. The use of quotation marks around the word "blunder" suggests at least that Gamow intended to convey a genuine quotation. The fact that Gamow used exactly the same expression on two occasions also indicates that he was trying to give the impression, at least, that he was quoting Einstein directly. Furthermore, as will be seen, Gamow here reveals his own prejudice against the cosmological constant by using the expression "its ugly head."
Interestingly, I discovered that Einstein actually did use the expression "I made one great mistake in my life," but in a totally different context. Obviously, this fact by itself does not exclude the possibility that Einstein used the expression "biggest blunder" in a scientific context as well, although the language he used in the conversation with Pauling gives one pause. Unfortunately, as I discovered, the reality was quite different. At one point, he asked the relevant service divisions whether Einstein worked for them. The answer, in both cases, was negative. They explained to Brunauer that Einstein was a pacifist and that he was not "interested in anything practical." Brunauer was also the officer who recruited Gamow, on September 20, 1943. "The one who visited him the most was me, and that happened roughly every two months." Scrutiny of the sparse formal correspondence between Gamow and Einstein only reinforced my impression that the two men had never been close friends. In one letter, Gamow asks Einstein for his opinion on the idea that the universe as a whole could have a nonzero angular momentum. In another, Gamow enclosed his article on the synthesis of the elements in the big bang. Einstein responded politely to Gamow's letters, but he never mentions the cosmological constant. Perhaps the most revealing piece of information in all the correspondence is a comment that Gamow added to Einstein's letter of August 4, 1946. Einstein informed Gamow that he had read the manuscript on nucleosynthesis in the big bang and was "convinced that the abundance of elements as a function of atomic weight is a starting point of extreme importance for cosmogonic speculations." Gamow wrote at the bottom of the letter: "Of course, the old man agrees with almost everything lately." To explore this question further, I examined Einstein's articles, books and personal correspondence written after 1932, in search of any other mention of the cosmological constant.
Einstein's writings leave no doubt that after the discovery of the expansion of the cosmos he was not happy to have introduced the cosmological constant. The book does not even mention the cosmological constant. "As Friedmann first showed, one can reconcile a finite mean density of matter everywhere with the original form of the equations of gravity if one admits the temporal variability of the metric distance between two mass points." In other words, Einstein recognized that the principles of general relativity allowed a cosmological repulsion term to be added to the equations, but since it was not necessary, he appealed to mathematical simplicity to reject it. He then supplemented his comment with a footnote: "If Hubble's expansion had been discovered at the time of the creation of the general theory of relativity, the cosmological member would never have been introduced. It now seems much less justified to introduce such a member into the field equations, since its introduction loses its sole original justification, that of leading to a natural solution of the cosmological problem." In Appendix 4 of his book on the special and general theories of relativity, Einstein also points out that the cosmological term "was not required by the theory as such nor did it seem natural from a theoretical point of view." According to its author, who belonged to the circle closest to Einstein, Einstein would later reject the cosmological term as "superfluous and no longer justified." Pauli also commented that he himself accepted Einstein's point of view. But nowhere does the slightest allusion to the "biggest blunder" appear. An analysis of all of Einstein's records on the cosmological constant makes it absolutely clear that he rejected it on the basis of only two criteria: an aesthetically motivated simplicity, and regret at having introduced it for a mistaken motivation.
As I have already pointed out in chapter 2, simplicity in the principles involved in a theory is one of the characteristics that make it beautiful. Einstein's experience during the development of general relativity had only reinforced his confidence in mathematical principles. Adding another constant to the equations gave Einstein no reductionist beauty, but he was willing to tolerate it as long as it seemed imposed on him by what he perceived as a static reality. The moment the cosmos was discovered to be expanding dynamically, Einstein was pleased to free his theory of what he now considered excess baggage. In that letter, Lemaître did everything he could to convince Einstein that the cosmological constant was actually necessary to explain a certain number of cosmological facts, including the age of the universe. Einstein admitted at the outset that "the introduction of the Λ term offers a possibility" of avoiding a contradiction with the geological ages. It should be remembered that the age of the universe implied by Hubble's original observations was much less than the age of the Earth. Lemaître believed that he could resolve this conflict if the equations included the cosmological constant. However, Einstein repeated his reductionist arguments to justify his reluctance to accept the cosmological constant. What he wrote was this: "Since I introduced this term I have always had a bad conscience. But at that time I could see no other way to account for the fact of the existence of a finite average density of matter. It seemed really ugly to me that the law of the gravitational field should be composed of two logically independent terms joined by addition. About the justification of such feelings concerning logical simplicity it is difficult to argue. I cannot help but feel them strongly, and I am unable to believe that something so ugly should occur in nature."
In other words, the original motivation no longer existed, and Einstein believed that aesthetic simplicity was being violated, so he did not believe that nature needed a cosmological constant. It is true that he felt uncomfortable with the concept, and that in 1919 he had called it "gravely detrimental to the formal beauty of the theory." But general relativity definitely allowed the introduction of the cosmological term without violating any of the fundamental principles on which the theory was based. In this sense, even before the recent discoveries concerning the cosmological constant, introducing it had been no error at all, and Einstein knew it. The experience acquired in theoretical physics since Einstein's time has taught us that any term allowed by the basic principles will probably prove necessary. Reductionism applies to the fundamental principles, not to the specific form of the equations. In conclusion, it is practically impossible to prove beyond doubt that someone did not say something. That claim was, in my humble opinion, almost certainly one of Gamow's hyperboles. My conclusion is that Gamow probably invented it! One may ask why this invention of Gamow's has become one of the most memorable stories in the folklore of physics. The answer, I believe, is threefold. First, people in general, and the media in particular, love superlatives. News about science is always more attractive if it includes expressions such as "the fastest," "the farthest," or "the first." Human as he was, Einstein erred many times, but none of his other mistakes gave rise to headlines like the one described as his greatest error. Second, Einstein has become the very incarnation of genius, the man who, armed only with his intellectual powers, discovered how the universe works. It has been said of the ancient Greeks that they found the universe a mystery and left it a polis. From the perspective of modern cosmology, this aphorism fits Einstein even better.
The fact that even a scientific power of this caliber is fallible is both fascinating and a lesson in humility, and it shows how science actually progresses. Not even the most impressive minds are infallible; they merely prepare the way for the next level of understanding. The third reason for the popularity of the cosmological constant, which has sometimes been called the most famous fudge factor in the history of science, is that it has turned out to be a true survivor. What is more, this ostensible "error" has not only refused to die, but has become the real center of attention over the last decade. Lemaître, apart from his general feeling that Λ should not be rejected just because it had been introduced for the wrong reasons, had two other compelling motivations for wanting to keep the cosmological constant alive. The first was that it offered a potential solution to the discrepancy between the young age of the universe and the geological time scales. In some of Lemaître's models, a universe with a cosmological constant could linger for a long time in a phase of stagnation, thus prolonging the age of the cosmos. The second reason Lemaître defended Λ had to do with his ideas about the formation of galaxies. He surmised that during the stagnation phase, the regions of higher density were amplified and grew into protogalaxies. Although at the end of the 1960s it was demonstrated that this particular idea did not hold up, it helped keep the cosmological constant in limbo, but not discarded. Arthur Eddington was another great defender of the cosmological constant. So much so, in fact, that he once declared defiantly: "The return to the old view is unthinkable. Abandoning the cosmological constant would seem to me like returning to Newtonian theory." The main reason for Eddington's defense was that he believed that repulsive gravity was the true explanation for the observed expansion of the universe.
Several competing explanations of the recession of the nebulae have been proposed that do not accept it as evidence of a repulsive force. These must necessarily adopt the second alternative, and postulate that the great speeds have existed from the beginning. That could be true; but we can hardly describe it as an explanation of the great speeds. In other words, Eddington recognized that even without the cosmological constant, general relativity allowed an expanding universe as a solution. However, this solution had to assume that the cosmos had started out at high speeds, without providing any explanation for those particular initial conditions. The inflationary model, the idea that the universe experienced a phenomenal expansion when it was barely a fraction of a second old, was born of a similar dissatisfaction with the need to depend on specific initial conditions as the cause of observed cosmic effects. For example, inflation is assumed to have stretched the fabric of the universe so much that it flattened the cosmic geometry. At the same time, inflation is believed to have been the agent that took quantum fluctuations of subatomic size in the density of matter and inflated them to cosmological scales. These were the density enhancements that would later become the seeds for the formation of cosmic structure. As I have already pointed out in chapter 9, Hoyle's steady-state model of 1948 reproduced some of the characteristics of inflationary cosmology. The field term that Hoyle had introduced into Einstein's equations to allow for the continuous creation of matter acted in various ways like a cosmological constant. In particular, it caused the universe to expand exponentially. In the first case, McCrea observed, the cosmological constant becomes an annoyance, since its value cannot be determined from within the theory.
In the second case, he said with great insight, the value of the cosmological constant can be fixed through the connection between general relativity and other relevant branches of physics. As we will soon see, physicists are trying to understand the nature of the cosmological constant precisely through their efforts to unify the big and the small, general relativity and quantum physics. As with so many other physical concepts, Newton was the first to consider the effects of such a force. In his famous Principia he discussed, in addition to the normal force of gravity, another force that "increases in the simple proportion of the distance." Newton was able to demonstrate that for this type of force, as for gravity, a spherical mass could be treated as if the whole mass were concentrated at its center. However, he did not thoroughly examine the case in which the two forces act in tandem. Newton might have paid more attention to this scenario had he noticed, or taken more seriously, the fact that his law of gravitation could not easily be applied to the universe as a whole. If one tries to calculate the gravitational force at any point in a cosmos of infinite extension and uniform density, the calculation yields no definite value. The situation is a little like trying to calculate the sum of the infinite series 1 - 1 + 1 - 1 + 1 - 1, and so on: the result depends on where you stop. The ubiquitous Lord Kelvin, for example, proposed that the ether did not participate in any way in gravitation. In the end, all these early attempts culminated in Einstein's theory of general relativity and the subsequent extension of his equations with the cosmological constant. But at the end of the 1960s, astronomical observations provided the necessary impetus for this phoenix to rise from its ashes. Astronomers found what seemed to be an excess in the counts of quasars clustered around an epoch about ten billion years ago.
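The stopping-point dependence of that divergent sum can be made concrete in a few lines of code (a toy illustration, not from the original text):

```python
# Partial sums of the divergent series 1 - 1 + 1 - 1 + ...
# The "result" oscillates depending on where you stop, which is
# why the sum has no definite value.

def partial_sum(n_terms: int) -> int:
    """Sum the first n_terms of the series 1 - 1 + 1 - 1 + ..."""
    return sum((-1) ** k for k in range(n_terms))

# Stopping after an odd number of terms gives 1; after an even number, 0.
print([partial_sum(n) for n in range(1, 9)])  # [1, 0, 1, 0, 1, 0, 1, 0]
```

Just as this series has no well-defined limit, the gravitational force at a point in an infinite uniform cosmos depends on the order in which the contributions are added up.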
This excess density could be explained if somehow the universe had lingered for a time at the size it had then, approximately a third of its current extent. In fact, some astrophysicists showed that it was possible to obtain that astronomical loitering from Lemaître's model, since it included a quiet, quasi-static stagnation phase. Although this particular model did not survive much longer, it managed to draw attention to a possible interpretation of the cosmological constant: the energy density of empty space. This idea is so fundamental and at the same time so disconcerting that it deserves an explanation.

From the largest scales to the smallest

By definition, mathematical equations are expressions or propositions that affirm the equality of two quantities. For example, Einstein's most famous equation, E = mc², expresses the fact that the energy associated with a given mass is equal to the product of that mass and the square of the speed of light. In Einstein's equations of general relativity, the left side describes the geometry of spacetime, while the right side represents the contents of matter and energy. This is a clear manifestation of the essence of general relativity: matter and energy determine the geometry of spacetime, which is the expression of gravity. When he introduced the cosmological constant, Einstein added it to the left side, because he conceived of it as one more property of spacetime. However, if this term is moved to the right side, it acquires a new physical meaning. Instead of describing geometry, the cosmological term becomes part of the energy balance of the cosmos. However, the characteristics of this new form of energy differ from those of the energy associated with matter and radiation in two important ways. First, while the density of matter decreases as the universe expands, the energy density corresponding to the cosmological constant remains forever constant. And as if that were not strange enough, this new form of energy has negative pressure!
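Schematically, and in standard textbook notation rather than Einstein's own, moving the cosmological term across the equality changes its reading from geometry to energy:

```latex
% Lambda on the left: a property of spacetime geometry
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
\qquad\Longleftrightarrow\qquad
% Lambda on the right: an energy component of the cosmos
G_{\mu\nu} = \frac{8\pi G}{c^4}\left( T_{\mu\nu} + T^{(\Lambda)}_{\mu\nu} \right),
\qquad
T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda c^4}{8\pi G}\, g_{\mu\nu}.
```

Read as an energy source, this term has the constant density \(\rho_\Lambda = \Lambda c^2 / 8\pi G\) and the pressure \(p_\Lambda = -\rho_\Lambda c^2\), while ordinary matter dilutes as \(\rho_m \propto a^{-3}\) with the cosmic scale factor \(a\). In the Friedmann acceleration equation, \(\ddot a / a = -\tfrac{4\pi G}{3}\left(\rho + 3p/c^2\right)\), a component with \(p = -\rho c^2\) contributes a positive, that is repulsive, term.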
This is precisely the attribute of the cosmological constant that Einstein had exploited in his attempt to keep the universe static. The basic symmetry of general relativity, that the laws of nature should make the same predictions in different frames of reference, implies that only the vacuum can have an energy density that is not diluted by expansion. After all, how could empty space be diluted any further? But is empty space really nothing? Not in the strange world of quantum mechanics. When one enters the subatomic domain, the vacuum ceases to be nothing. In fact, it is a frenzy of pairs of virtual particles and antiparticles that pop in and out of existence on an extremely fleeting time scale. Consequently, even empty space can be endowed with an energy density and, therefore, be a source of gravity. This interpretation is completely different from the one suggested by Einstein, who saw his cosmological constant as a potential peculiarity of spacetime that served to describe the universe on its largest cosmic scales. The identification of the cosmological constant with the energy of empty space, although mathematically equivalent, immediately relates it to the most minute subatomic scales, which are the domain of quantum mechanics. McCrea's observation of 1971, that perhaps the value of the cosmological constant could be determined from physics external to classical general relativity, turned out to be truly visionary. I should point out that Einstein himself made an interesting attempt to connect the cosmological constant with the elementary particles. In what could be seen as his first foray onto the battlefield of attempts to unify gravity and electromagnetism, Einstein proposed in 1919 that perhaps electrically charged particles are held together by gravitational forces. This led to an electromagnetic constraint on the value of the cosmological constant. Apart from a brief note on the subject in 1927, Einstein never dealt with this question again.
The idea that the vacuum is not really empty but can contain a large amount of energy is not actually new. In the 1920s, the founders of quantum mechanics, and especially Wolfgang Pauli, discussed the fact that in the quantum domain the lowest possible energy of any field is not zero. However, not even Pauli's conclusions found an echo in cosmological considerations. The first person who specifically connected the cosmological constant with the energy of empty space was Lemaître. He went on to say that the energy density of the vacuum must be associated with a negative pressure, and that "this is essentially the meaning of the cosmological constant." In 1967 Zeldovich made the first genuine attempt to calculate the contribution of vacuum fluctuations to the value of the cosmological constant (figure 37). Unfortunately, along the way he made some ad hoc assumptions without explaining his reasoning. In particular, Zeldovich assumed that most of the zero-point energies somehow cancel, leaving only the gravitational interaction between the virtual particles in the vacuum. Even with this unjustified omission, the value he obtained was totally unacceptable: approximately one billion times greater than the energy density of all the matter and radiation in the observable universe. For example, physicists first naively assumed that they could add up the zero-point energies up to the scale at which our theory of gravity breaks down, that is, to the point where the universe is so small that a quantum theory of gravity is needed. However, when the particle physicists made the estimate, they found a value about 123 orders of magnitude greater than the combined cosmic energy density of matter and radiation. Obviously, if the energy density of empty space were really so high, not only would there have been no galaxies and stars, but the enormous repulsion would have instantly torn apart atoms and nuclei.
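The mismatch of roughly 123 orders of magnitude can be reproduced with back-of-the-envelope arithmetic (a sketch in natural units; the Planck energy and the measured dark-energy density are standard round numbers, not figures taken from the text):

```python
import math

# Natural units: energy densities expressed as (energy in GeV)^4.
E_PLANCK = 1.22e19                  # Planck energy in GeV, where quantum gravity takes over
RHO_VACUUM_NAIVE = E_PLANCK ** 4    # naive sum of zero-point energies up to the Planck scale

RHO_OBSERVED = 2.8e-47              # measured dark-energy density, roughly (2.3 meV)^4

orders_of_magnitude = math.log10(RHO_VACUUM_NAIVE / RHO_OBSERVED)
print(f"discrepancy: about {orders_of_magnitude:.0f} orders of magnitude")  # ~123
```

The precise exponent depends on the cutoff chosen, but any cutoff near the Planck scale lands in this same catastrophic neighborhood.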
In a desperate attempt to correct that conjecture-based estimate, physicists used principles of symmetry to argue that the sum of the zero-point energies should be truncated at a lower energy. To their despair, although the revised estimate gave a considerably lower value, the energy was still too high by about 53 orders of magnitude. As will be appreciated, mathematically speaking that is exactly equivalent to what Einstein did when he simply removed the cosmological constant from his equations. Assuming that the cosmological constant vanishes means that it is not necessary to include a term of repulsion in the equations. The reasoning, however, was completely different. Hubble's discovery of cosmic expansion soon undermined Einstein's original motivation for introducing the cosmological constant. Even so, many physicists considered it unjustified to assign the specific value of zero to lambda merely for the sake of brevity or as a remedy for a "bad conscience." On the other hand, in its modern incarnation as the energy of empty space, the cosmological constant seems to be mandatory from the perspective of quantum mechanics, unless all the different quantum fluctuations somehow conspire to add up to zero. This unsettled and frustrating situation lasted until 1998, when new astronomical observations turned the whole question into what could arguably be described as the greatest challenge facing physics today.

The accelerated universe

Since Hubble's observations of the late 1920s, we have known that we live in an expanding universe. Einstein's theory of general relativity provided a natural interpretation of Hubble's discoveries: the expansion is a stretching of the fabric of spacetime itself. However, just as the Earth's gravity slows the motion of any object thrown upward, one would expect the cosmic expansion to be slowed down by the mutual gravitational attraction of all the matter and energy in the universe.
But in 1998 two teams of astronomers working independently discovered that the cosmic expansion is not slowing down; on the contrary, during the last six billion years it has been accelerating! The discovery of the accelerating expansion came at first as a complete surprise, because it implied some kind of repulsive force driving the expansion of the universe. To reach their surprising conclusion, the astronomers relied on observations of very bright stellar explosions known as type Ia supernovae. These stellar explosions are so luminous that they can be detected from more than halfway across the observable universe. Type Ia supernovae are very rare: they occur only about once per century in a given galaxy. As a result, the teams had to examine thousands of galaxies to obtain a sample of a few dozen supernovae. Armed with these data, they compared their results with the predictions of Hubble's linear law. A meticulous analysis showed that the results implied a cosmic acceleration over roughly the last six billion years. Since its discovery in 1998, new pieces of this puzzle have appeared, and all of them corroborate the fact that some new, very smoothly distributed form of energy produces a repulsive gravity that pushes the universe to accelerate. First, the supernova sample has grown considerably and now covers a wider range of distances, which places the finding on a firmer footing. Second, Riess and his collaborators have shown through later observations that the current acceleration phase of the last six billion years was preceded by an earlier period of deceleration. From all this emerges a beautiful and persuasive picture: when the universe was smaller and much denser, gravity had the winning hand and slowed the expansion. The densities of matter and radiation were extremely high in the early universe, but they have steadily diminished as the universe has expanded.
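The logic of using type Ia supernovae as "standard candles" can be sketched with the inverse-square law: since their intrinsic peak luminosity is essentially fixed, the observed brightness yields the distance. A minimal sketch (the peak absolute magnitude M ≈ -19.3 is the standard textbook value for type Ia supernovae; the apparent magnitude below is an invented example, not a measurement from the text):

```python
def luminosity_distance_pc(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """Distance in parsecs from the distance-modulus relation m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical supernova observed at apparent magnitude 24.3:
d_pc = luminosity_distance_pc(24.3)
print(f"distance: {d_pc / 1e9:.2f} billion parsecs")
```

Comparing distances obtained this way with the distances implied by the galaxies' redshifts under different expansion histories is what revealed that the expansion has been speeding up.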
When the energy density of matter fell below that of the vacuum, the acceleration began. After taking into account all the constraints imposed by the observations, astronomers were able to determine precisely the current contribution of the supposed vacuum energy to the total energy balance of the cosmos. For example, the electron has a spin of ½, and its supersymmetric "shadow" would supposedly have a spin of 0. If, in addition, all the superpartners had the same masses as their known partners, the theory predicts that the contributions of each of these pairs would cancel exactly. But supersymmetry, if it exists at all, must be broken, since no superpartner has ever been observed. When this fact is taken into account, the total contribution to the vacuum energy is greater than that observed by about 53 orders of magnitude. One could still harbor the hope that some supersymmetry that no one has yet imagined will produce the desired cancellation. However, the measurement of the cosmic acceleration is a milestone precisely because it shows us that this is not very likely. The extraordinarily small, but nonzero, value of the cosmological constant has convinced many theorists that it is futile to seek an explanation based on symmetry arguments. After all, how can a number be reduced to 0.0000000000000000000000000000000000000000001 of its original value without canceling it altogether? Such a remedy seems to require an extremely fine level of adjustment that most physicists are not willing to accept. So what way out is left? In their desperation, some physicists have resorted to one of the most controversial concepts in the history of science: anthropic reasoning, a form of argument in which the mere existence of human observers is considered part of the explanation. Einstein had nothing to do with this development, but it is the cosmological constant that has convinced quite a few of today's leading theorists to take this reasoning seriously. Here is a succinct explanation of the crux of all this commotion.
Anthropic Reasoning

Almost everyone would agree that the question "Is there intelligent extraterrestrial life?" is one of the most intriguing questions in science today. That this is a reasonable question to ask at all is due to a simple fact: the properties of our universe, and the laws that govern it, have allowed the emergence of complex life. Obviously, the biological peculiarities of humans depend crucially on the properties of the Earth and its history, but there are some basic requirements that seem necessary for any form of intelligent life to materialize. For example, galaxies composed of stars, with planets in orbit around some of those stars, seem a reasonably generic requirement. In the same way, nucleosynthesis inside stars had to forge the fundamental building blocks of life: atoms such as carbon, oxygen and iron. In principle, one can imagine "counterfactual" universes that are not conducive to the emergence of complexity. Imagine, for instance, a universe identical to ours except for the value of a single parameter, the cosmological constant, a thousand times higher than in our own. In such a universe, the repulsive force associated with the cosmological constant would cause such a rapid expansion that no galaxy would form. As we have already seen, the question that we inherited from Einstein is this: why should there be a cosmological constant at all? Modern physics transformed that question into this one: why should empty space exert a repulsive force? However, as a result of the discovery of the accelerating expansion, what we now ask ourselves is: why is the cosmological constant so small? The anthropic answer runs roughly as follows: imagine that there is a huge ensemble of universes and that the cosmological constant can assume different values in different universes. In some of them, as in the counterfactual universe we proposed with a lambda a thousand times greater, complexity and life would never have developed. Humans find themselves, of course, in one of the "biophilic" universes.
Thus, there would be no grand unified theory of the fundamental forces that could fix the value of the cosmological constant; it would be determined by the simple requirement that it fall within the range that allows the evolution of humans. In a universe with too high a cosmological constant, there would be no one to wonder about its value. The physicist Brandon Carter, who first presented this argument in the 1970s, dubbed it the "anthropic principle." Accordingly, attempts to delineate the "pro-life" domains are described as anthropic argumentation or reasoning. The argument rests on two key assumptions: that some of the nominal "constants of nature" are accidental, not fundamental; and that our universe is just one member of a gigantic ensemble of universes. Let us examine each of these points very briefly and try to evaluate its viability. First, though, let us look at some examples of selection bias, the effect that lies at the heart of anthropic reasoning. Imagine that we want to test an investment strategy by examining the behavior of a large group of stocks against twenty years of data. One might be tempted to include in the study only the stocks for which complete information is available for the twenty years of the study period. However, eliminating the stocks that stopped trading during that period would produce biased results, because those are precisely the stocks that did not survive the market. During the Second World War, the Austro-Hungarian-born Jewish mathematician Abraham Wald showed that he understood selection bias very well. Asked where to add armor to aircraft, he studied the distribution of damage on planes returning from missions. To the surprise of his superiors, Wald recommended shielding the parts that showed no damage: planes hit in those places were precisely the ones that never made it back. Astronomers are well aware of the Malmquist bias. When astronomers survey stars or galaxies, their telescopes are only sensitive to objects above a certain apparent brightness, so distant samples are skewed toward the intrinsically most luminous objects. Anthropic reasoning involves a selection bias of the same kind. For example, we could never discover that our universe contains no carbon, since we are a carbon-based life form. At first, most researchers saw Carter's anthropic reasoning as nothing more than an obvious and trivial assertion.
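The investment example above can be simulated in a few lines (a toy model with invented numbers: stocks whose cumulative return falls below a cutoff are "delisted" and vanish from the survivor-only sample):

```python
import random

random.seed(42)  # reproducible toy data

# Simulate total 20-year returns for many stocks (arbitrary toy distribution).
returns = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Stocks that performed badly stop trading and drop out of the dataset.
DELISTING_CUTOFF = -0.5
survivors = [r for r in returns if r > DELISTING_CUTOFF]

mean_all = sum(returns) / len(returns)
mean_survivors = sum(survivors) / len(survivors)
print(f"mean return, all stocks:     {mean_all:+.3f}")
print(f"mean return, survivors only: {mean_survivors:+.3f}")  # biased upward
```

Studying only the survivors systematically overstates the average return; in exactly the same way, observers can exist only in "surviving" universes, which biases what they can measure.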
However, during the last couple of decades, the anthropic principle has gained some popularity. Today more than one prominent theorist accepts that, in the context of a multiverse, the anthropic argument can lead to a natural explanation of the otherwise baffling value of the cosmological constant. To recapitulate the argument: if lambda were much larger, the cosmic acceleration would have overcome gravity before the galaxies had even had a chance to form. The fact that we live in the Milky Way galaxy necessarily biases our observations toward low values of the cosmological constant in our universe. But to what extent is it reasonable to assume that a physical constant is "accidental"? A historical example can help clarify this concept. In his book Mysterium Cosmographicum, Kepler believed he had found the solution to two puzzling cosmic enigmas: why there were precisely six planets in the solar system, and what determined the sizes of the planetary orbits. Even for his time, Kepler's answers to these questions bordered on extravagance. What he did was to build a model of the solar system by nesting inside one another the five regular bodies known as the Platonic solids. Together with an outer sphere corresponding to the fixed stars, the solids determined precisely six spaces which, for Kepler, "explained" the number of planets. By choosing a particular order in which to nest one solid inside another, Kepler managed to roughly reproduce the relative sizes of the orbits of the solar system. The main problem with Kepler's model was not in the geometric details; after all, Kepler used the mathematics he knew to explain the available observations. The main failure was that Kepler did not realize that neither the number of planets nor the sizes of their orbits were fundamental quantities, that is, quantities that can be explained from first principles.
Although the laws of physics no doubt govern the general process of the formation of planets from a protoplanetary disk of gas and dust, it is the particular environment of a young stellar object that determines the final result. Today we know that in the Milky Way there are millions of extrasolar planets, and each planetary system is different in terms of its members and orbital properties. Both the number of planets and the dimensions of their orbits are accidental, just as the precise shape of a snowflake, for example, is accidental. There is one particular magnitude in the solar system that has been crucial to our existence: the distance between the Sun and the Earth. Our planet is located in the habitable zone around the Sun, a narrow circumstellar region in which liquid water can exist on a planet's surface. Any closer and the water would evaporate; any farther and it would freeze. Water was essential for the appearance of life on Earth because molecules could combine easily in the "broth" of the young Earth and form long chains sheltered from harmful ultraviolet radiation. Kepler was obsessed with the idea of finding an explanation based on first principles for the Earth-Sun distance, but he was wrong in his obstinacy. There was nothing that, in principle, prevented the Earth from forming at a different distance. But if that distance had been significantly greater or smaller, there would have been no Kepler to wonder why. Among the billions of solar systems in the Milky Way, many probably harbor no life because they have no planet in the habitable zone around their star. Although the laws of physics determined the orbit of the Earth, there is no deeper explanation for the radius of its orbit other than the fact that if it had been very different, we would not be here. This brings us to the last necessary ingredient of the anthropic argument.
For the explanation of the value of the cosmological constant as an accidental magnitude in a multiverse to have any meaning, there must be a multiverse. Does one exist? We do not know, but that has not stopped some insightful physicists from speculating. What we do know is that in a theoretical scenario known as "eternal inflation," a drastic stretching of spacetime can produce an eternal and infinite multiverse. Supposedly, this multiverse would continually generate inflating regions, which would evolve into particular "pocket universes." The big bang with which our own "pocket universe" appeared would be just one more event in a much larger scheme of an exponentially expanding substrate. Some versions of "string theory" also allow for a huge variety of universes, each potentially characterized by different values of the physical constants. If this speculative scenario were correct, what we have traditionally called "the universe" would be nothing more than one patch of spacetime within a vast cosmic landscape. I do not want anyone to get the impression that all physicists believe that the solution to the puzzle of the energy of empty space will come through anthropic reasoning. The mere mention of "multiverse" and "anthropic" makes some physicists' blood pressure rise. There are two main reasons for this adverse reaction. The first is that, as already mentioned in chapter 9, ever since the pioneering work of the philosopher of science Karl Popper, for a theory to deserve the name scientific, it must be falsifiable through experiments or observations. This requirement has become the foundation of the "scientific method." However, the boundary between what we define as observable and unobservable is not sharp. Think, for example, of the "particle horizon," the surface surrounding us from which the radiation emitted in the big bang is just now arriving. In an accelerating universe such as ours, any object that is currently beyond the horizon will remain beyond it forever.
As their recession speeds approach the speed of light, their radiation will be stretched until its wavelength exceeds the size of the universe. Thus, even our own accelerating universe contains objects that neither we nor future generations of astronomers will ever be able to observe. However, it does not occur to us to consign these objects to the domain of metaphysics. So what could give us confidence in potentially unobservable universes? The answer is a natural extension of the scientific method: we can believe in their existence if they are predicted by a theory that has acquired credibility because it is corroborated in other ways. We believe in the properties of black holes because their existence is predicted by general relativity, a theory that has been put to the test in numerous experiments. The second main reason anthropic reasoning provokes such hostile passions is that for some scientists it signals the "end of physics," the abandonment of the dream of explaining all the constants of nature from first principles. There is no doubt that Einstein harbored this hope. As is well known, Einstein was never comfortable with the probabilistic nature of quantum mechanics, although he fully appreciated its successes. As he famously wrote to Max Born in 1926: "Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but it hardly brings us closer to the secrets of the Old One. I, at any rate, am convinced that He does not play dice." The concept of accidental variables in a potentially unobservable multiverse would surely have disturbed Einstein even more. Note, however, that Einstein's reservations toward quantum mechanics were born more of psychology than of the hard core of physics. The same could happen in the end with the objections to anthropic reasoning. Despite the experience of recent decades, nothing guarantees that physical reality will lend itself to being explained in its entirety from first principles. The search for such descriptions may prove as futile as Kepler's search for a beautiful geometric model of the solar system.
What we have traditionally called fundamental constants, and even what we take to be laws of nature, could turn out to be nothing more than accidental variables and by-laws valid only in our own universe. Anthropic thinking about the nature of the cosmological constant demonstrates the profound impact that Einstein's seemingly innocent attempt to describe a static universe still has on the avant-garde of physics. Although 1905 was undoubtedly a prodigious year for Einstein, he actually had a second annus mirabilis from November 1915 to February 1917. Thus was born modern cosmology, and with it the cosmological constant. I trust that the evidence presented in chapter 10 will have convinced the reader that Einstein most likely never used the phrase "greatest error." Furthermore, the introduction of the cosmological constant was not an error at all, since the principles of general relativity gave the green light to that term. To believe that the constant would guarantee a static universe was undoubtedly an unfortunate mistake, but not of the magnitude of those considered in this book. Recall once again that removing the term from the equations is equivalent to arbitrarily assigning the value zero to lambda. In doing so, Einstein restricted the generality of his theory, paying a high price for the conciseness of the equations, even before the recent discovery of cosmic acceleration. Simplicity is a virtue when applied to fundamental principles, not to the form of equations. In the case of the cosmological constant, Einstein was wrong to sacrifice generality on the altar of a superficial elegance. A simple analogy will help clarify this point. Galileo still lived a prisoner of the aesthetic ideals of antiquity, according to which orbits had to be perfectly symmetrical. But physics has shown this to be an unjustified prejudice. In reality there is a symmetry deeper than the simple symmetry of forms. 
Newton's law of universal gravitation says that elliptical orbits can have any orientation in space. In other words, the law does not change whether we measure directions with respect to the north, the south, or the nearest star: it is symmetric under rotation. When Einstein called the cosmological constant "ugly," he showed himself to be suffering from the same kind of prejudice and myopia. The errors of a genius. More than 20 percent of Einstein's original articles contain errors of one kind or another. In several cases, although he makes mistakes along the way, the final result is still correct. This is often the mark of the great theorists: they are guided more by intuition than by formalism. A theorist goes astray in two ways: (1) the devil leads him by the nose with a false hypothesis; (2) his arguments are erroneous and sloppy. Although Einstein himself certainly made mistakes of both types, his unparalleled clarity of vision for physics showed him, time and again, the path that led to the correct answers. Unfortunately we mere mortals can neither imitate nor acquire his talent. No fewer than six Nobel laureates contributed to this book. It is possible that the discovery of the cosmological constant is one of those cases. Einstein himself remained unconvinced. He goes on to say that, following Hubble's discovery of the cosmic expansion and Friedmann's demonstration that the expansion could be accommodated within the original equations, it seemed to him that the introduction of lambda "is at present unjustified." Notice, by the way, that although Einstein wrote these comments not long after his correspondence with Gamow, he does not allude at any point to his "greatest error." On the one hand, it can be argued that Einstein was right to refuse to add to his equations a term that the observations did not absolutely require. 
On the other hand, Einstein had already missed one opportunity, to predict the cosmic expansion, by relying on the lack of evidence of stellar motions. By renouncing the cosmological constant, he lost a second chance, this time to predict the acceleration of the universe! In an ordinary scientist, two oversights like these would undoubtedly be taken as a lack of intuition, a conclusion we cannot reach in the case of Einstein. Einstein's failures remind us that human logic is not error-proof, even when the one exercising it is a monumental genius. Einstein continued to think about a unified theory and the nature of physical reality until the end of his days. As early as 1940, he foresaw the difficulties faced by current string theorists: "The two systems do not directly contradict each other, but seem to be unfit to merge into a single, unified theory." With or without errors, possibly no one in recent memory has aspired more to the truth than Albert Einstein. As methods and tools of observation and experimentation improve, theories can be refuted or can metamorphose into new forms that incorporate some of the old ideas. Darwin's theory of the evolution of life through natural selection was only reinforced by the application of modern genetics. Newton's theory of gravity lives on as a limiting case within the framework of general relativity. The road to a "new and improved" theory is not strewn with roses, and progress is definitely not a straight sprint to the truth. "His errors are volitional and are the portals of discovery," he wrote, intending to be provocative with the first part of his comment. However, as we have seen in this book, there is no doubt that the errors of geniuses are portals of discovery. The most famous of these is "never get involved in a war in Asia." We will all agree that recent history has shown this to be good advice. The examples in this book show that this "commandment" can also be taken as a recommendation on how to avoid big mistakes... 
although I am not entirely sure of that. Doubt is often seen as a sign of weakness, but it is also an effective defense mechanism and an essential operating principle in science. Kelvin, Hoyle and Einstein have also revealed another fascinating aspect of human nature. Just as people are sometimes reluctant to admit their mistakes, at other times they stubbornly oppose new ideas. One of the things they discovered was that people tend to trust their intuitive understanding more than actual data. Unfortunately, this advice is not easy to follow. Modern neuroscience shows unequivocally that the orbitofrontal cortex integrates emotions into the stream of rational thought. Humans are not purely rational beings capable of completely extinguishing their passions. Despite their mistakes, or perhaps thanks to them, the five characters I have followed and profiled in this book not only produced innovations within their respective sciences, but also truly important intellectual creations. Unlike many scientific works addressed only to professionals in the discipline, the works of these masters have crossed the borders that separate science from general culture. The influence of their ideas has been felt far beyond their immediate significance for biology, geology, physics or chemistry. The History of Animals, book 9, chapter 6. The Nature of the Gods, p. 78; 1997. Speciation, Sinauer, Sunderland. The Meaning of Relativity, 5th ed. The Meaning of Relativity, 5th ed. Popular Lectures and Addresses, vol. Astrophysical Journal, 517, p. 565. Astronomical Journal, 116, p. 1009. The classic Castilian translation by Antonio de Zulueta, however, renders it as "develop". An urban subculture hostile to the conventional, with a fashion inspired by vintage and second-hand clothing. The species Amphiprion ocellaris. 
You can download it from Ironically, by means of radiometric techniques, rocks can be used to determine the age of the Earth, a time interval far longer than any clock designed by a watchmaker has ever measured. As you can imagine, there are numerous reprints of The Origin. Current developments in evolutionary psychology can be seen as descendants of those pioneering investigations. Undoubtedly, the clamor that arose against Darwinism would have been much less pronounced if evolution did not apply to humans. There are many excellent books on evolution and natural selection, at different levels. The following are some of those that I found useful: Ridley 2004a is a first-class textbook. A provocative philosophical approach is offered by Dennett 1995. An excellent review of the history of evolutionary theory is Depew and Weber 1995. Wilson 1992 makes an exhaustive review of biodiversity. Dawkins 1986, 2009, Carroll 2009 and Coyne 2009 are high-quality popular science books. Pallen 2009 is a concise and very accessible introduction. A fundamental book on the history and origins of the theory of evolution is Gould 2002. Another high-level historical review is Bowler 2009. The resistance to antibiotics and pesticides that develops within a few years is an example of microevolution. The origin of mammals from reptiles is an example of macroevolution. Charles Lyell extended the concept that geological changes are the result of the continuous accumulation of minute transformations over immeasurably long periods of time, especially in his book Principles of Geology. Classified as Priscomyzon riniensis. This pillar of Darwinian evolution has been confirmed by many spectacular findings. For example, discoveries of feathered dinosaur fossils, such as Microraptor gui and Mei long, are consistent with the idea that birds evolved from reptiles. The reader will find an interesting discussion of the tree of life in Dennett 1995. 
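The radiometric dating mentioned in the note above rests on the exponential decay law. As a minimal sketch of the arithmetic involved (the function name and the numbers are illustrative, not taken from the text, and the formula assumes a closed system with no initial daughter product):

```python
import math

def radiometric_age(daughter_over_parent, half_life):
    """Age from the decay law N(t) = N0 * exp(-lam * t).

    With no daughter product present at t = 0, the ratio D/P grows as
    D/P = exp(lam * t) - 1, so t = ln(1 + D/P) / lam,
    where lam = ln(2) / half_life is the decay constant.
    """
    decay_constant = math.log(2) / half_life
    return math.log(1.0 + daughter_over_parent) / decay_constant

# A rock in which the daughter/parent ratio equals 1 is exactly one
# half-life old; here with the uranium-238 half-life (in years).
print(radiometric_age(1.0, 4.47e9))
```

The key point is the one the note makes: the "clock" is built into the rock itself, and its range vastly exceeds any mechanical timepiece.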
The study that confirmed Nabokov's conjecture is Vila et al. In an impressive study, Meredith et al. 2011 used twenty-six genes to build the phylogeny of mammalian families and estimate the times of divergence. This term is sometimes used abusively to imply that one can ignore complexities and completely reduce one discipline to another. A good discussion of reductionism in the sense in which I use it here can be found in Weinberg 1992. Darwin 1964, p. 61. A very accessible description of natural selection is found in Mayr 2001. A textbook on selection is Bell 2008. Endler 1986 presents abundant evidence of natural selection. Malthus argued in his Essay on the Principle of Population that humans produce too many descendants and that, consequently, if no restriction is imposed, famine and "premature death will visit the human race in one form or another." The British geneticist Bernard Kettlewell conducted numerous investigations on the peppered moth and industrial melanism. His findings have been questioned by some but defended by others. An informative summary of the debate is found in Roode 2007. Popper 1978; also Miller 1985. There is a huge literature on genetic drift. Other readily available online resources are Kliman et al. A very complete text on population genetics is Hartl and Clark 2006. It is a manifestation in the Amish community of what is known as the "founder effect". When a population is reduced to a very small size because of environmental changes or migration, the genes of the resulting "founders" of the population have a disproportionate representation. It is also described in Blackburn 1902, part 2. The first to use this expression was Hardin 1959, p. 107. Brownlie and Lloyd Prichard 1963. A grammar school is a secondary school. There is an interesting description of Mendel and his work in Mawer 2006. The description presented here is basically a simplified version of the one presented by Ridley 2004a, pp. 
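The founder effect and genetic drift described in the note above can be illustrated with a toy Wright-Fisher simulation (a standard textbook model; the population sizes and frequencies here are invented for illustration, not drawn from the text):

```python
import random

def drift(freq, n, generations, rng):
    """Toy Wright-Fisher model: each generation resamples 2n gene
    copies from the current allele frequency, so chance alone can
    push a rare allele to high frequency, or lose it, when n is small."""
    for _ in range(generations):
        copies = sum(rng.random() < freq for _ in range(2 * n))
        freq = copies / (2 * n)
    return freq

rng = random.Random(0)
# 1000 replicate "founding" populations of ten individuals each,
# every one starting with a rare allele at 5 percent frequency.
outcomes = [drift(0.05, 10, 20, rng) for _ in range(1000)]
lost = sum(f == 0.0 for f in outcomes)
surviving = [f for f in outcomes if f > 0.0]
```

In most replicates the rare allele disappears entirely, while in the remainder it often ends up far above its starting frequency: the disproportionate representation of founders' genes that the note describes.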
The first to explain it is Fisher 1930. A more detailed analysis of Darwin's numerical attempts is found in Parshall 1982. Letter to Wallace of February 2, 1869, in Marchant 1916, vol. Darwin returned to this idea of a latent tendency in a letter he wrote to Wallace on September 23, 1868. See also Bulmer 2004, Morris 1994. The exact date of this letter is unknown, but since it was sent from Moor Park, it must date from before November 12, 1857. Letter dated "Tuesday, February, 1866." For a discussion of the Vatican's first responses to evolution, see Harrison 2001. The effect was demonstrated by Kruger and Dunning 1999. An accessible description can be found in Chabris and Simons 2010. Ancient Hindus believed that a cycle of destruction and renewal lasted 4.32 million years. Theophilus of Antioch converted to Christianity as an adult. Ussher calculated that the Creation had taken place in the year 710 of the Julian period; Brice 1982. The note was removed at the beginning of the 20th century. An English translation appears in Reinhardt and Oldroyd 1982. The reference is to Fontenelle, Entretiens sur la pluralité des mondes. An English translation of Maillet 1748 is Carozzi 1969. Newton 1687; see the English translation of Motte 1848, p. 486. The twentieth volume of the Histoire naturelle, générale et particulière was entitled Époques de la nature. In it, he divided the history of the Earth into seven epochs and tried to estimate the duration of each of them. A good description can be found in Haber 1959, p. 118. He wrote a series of articles and a book in favor of the biblical account and against Hutton. The quotation included here is from Kirwan 1797. There are several detailed biographies of Lord Kelvin. Burchfield 1990 focuses on Kelvin's research on the problem of the age of the Earth. Most expected that honor to go to William Thomson. In fact, his tutor, Dr. 
Cookson, commented that "it would be a great surprise to the university if it were not so". Thomson himself was not so sure. In the end, Kelvin, who had more talent but was slower, came in second. Kelvin made numerous contributions to thermodynamics. In 1844 he published an article on the "age" of temperature distributions. Basically, it showed that a temperature distribution measured in the present can only be the result of a heat distribution that existed at some finite time in the past. In 1848 he conceived the absolute scale of temperature that bears his name. A good description of the development of the theory of thermal conduction is found in Narasimhan 2010. These kinds of uncertainties ended up playing an important role in his error. This time scale is known today as the Kelvin-Helmholtz time scale. Shaviv 2009 presents a very detailed but quite accessible exposition of the theory of stellar structure and evolution. Chamberlain 1899 presents a commentary on Kelvin's speech of 1899. On February 27, 1868; Kelvin 1891-1894, vol. The angular velocity of the Earth in its rotation around its axis is greater than the angular velocity of the Moon in its orbit. Kelvin delivered his presidential address entitled "On the Origin of Life" in Edinburgh in August 1871. Burchfield 1990 provides a comprehensive discussion of Kelvin's influence and impact. The number of attendees was estimated at between four hundred and seven hundred. Apparently, Bishop Wilberforce's comments after the lecture lasted half an hour. He concluded by saying that "Mr. Darwin's conclusions were a hypothesis raised, in the least philosophical way, to the dignity of a causal theory." The most detailed study of the event is in Jensen 1988. For example, the Press reported on July 7 that "he asked the professor whether he would rather have an ape for his grandfather or his grandmother." In other surveys, slightly different lists appear. See, for example, Dalrymple 2001. 
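The Kelvin-Helmholtz time scale mentioned in the note above follows from assuming that the Sun's luminosity is powered solely by the release of gravitational energy. A back-of-the-envelope version, using standard modern values for the solar mass, radius and luminosity (these figures are mine, not the author's), is:

```latex
t_{\mathrm{KH}} \sim \frac{G M_{\odot}^{2}}{R_{\odot} L_{\odot}}
\approx \frac{(6.7\times 10^{-11})\,(2\times 10^{30})^{2}}
{(7\times 10^{8})\,(3.8\times 10^{26})}\ \mathrm{s}
\approx 10^{15}\ \mathrm{s} \approx 3\times 10^{7}\ \text{years},
```

which is of the same order as Kelvin's estimates of a few tens of millions of years, and far shorter than the spans demanded by geology and by Darwinian evolution.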
During his career, Perry introduced new methods of teaching mathematics and worked on problems of applied electricity. Salisbury argued that a hundred million years would not suffice for natural selection to transform jellyfish into humans, and he repeated Kelvin's objection based on the design argument. Also described in Shipley 2001. Perry wrote to Oliver Lodge on October 31, 1894. He added that if natural selection had to be ruled out, the only alternative left was to appeal to some providence, which he considered destructive to scientific reasoning. He also wrote to Kelvin on October 17, 1894, and again on October 22 and 23. The dinner was held on October 28, 1894. Perry wrote to Oliver Lodge on October 29. An excerpt from Tait's letter is included in Perry 1895a. Letter from Perry to Tait of November 26, 1894. This letter is also included in Perry 1895a. Perry also notes that "I found that many of my friends agreed with me." Letter from Tait to Perry of November 27, 1894. Letter from Perry to Tait of November 29, 1894. Later, the second turned out to be wrong. Letter from Kelvin to Perry of December 13, 1894. Included in Perry 1895a, p. 227. Kelvin was especially interested in checking Weber's results on conductivity. Perry to Kelvin, December 13, 1894. Thomson 1895, 1895; Thomson and Murray. Kelvin also based his conclusions on measurements of the melting point of diabase, a basalt, made by the geologist Carl Barus. The entire debate is described in detail in Shipley 2001 and Burchfield 1990. His letter to the editor occupied only fifteen lines. He speculated that the age estimated by Kelvin could be increased by a factor of ten or twenty. Eve 1939 offers a good biography of Rutherford, with a description of his research. In his letter of August 15, Lodge told Kelvin that "his brilliant and original mind has not always patiently submitted to the task of assimilating the work of others through the process of reading." The episode is described in Eve 1939, pp. 
Soddy 1906 reviews the controversy. Holmes 1947 offers an interesting review. They used the decay of krypton-81, a very rare isotope, to successfully date, in 2011, the ancient Nubian aquifer that stretches across North Africa. The classic text is Festinger 1957. Ochs 2005 and Dein 2001 offer interesting descriptions and analyses of the events surrounding the death of Schneerson. Olds and Milner 1954; Olds 1956 is a popular version. There have been numerous studies on positive affective reactions and addictions. Motivated reasoning involves emotional regulation. Studies suggest that motivated reasoning is qualitatively different from the reasoning that occurs when the person has no strong emotional involvement in the results. An extensive review of motivated reasoning is found in Kunda 1990. A popular book is Coleman 1995. 2006 presents the fMRI studies. Stacey 2000 offers a good discussion of the importance of Kelvin's estimate of the age of the Sun. The problem of energy generation in stars is discussed in chapter 8. Darwin inserted this phrase in the sixth edition; Peckham 1959, p. 728. There are quite a few biographies of Pauling. Several books deal very well with various aspects of Pauling's work. Pauling 1935, Pauling and Coryell 1936. 501-502 offers a good description. Hsien Wu had already made some progress in 1931. It is very significant, in relation to Pauling's later work, that the authors observe that "this chain is folded into a uniquely defined configuration, held together by hydrogen bonds." Hydrogen bonds would become a Pauling trademark. Pauling described his activities at that time in a dictation made in 1982. The original sheet of paper on which Pauling sketched the structure in 1948, and which he then folded, has never been found. Corey already had considerable experience with X-ray studies of proteins. Many years later, Pauling was gracious enough to comment that it might have been Corey who convinced him. 
Pauling wrote to the chemist and crystallographer Edward Hughes. Pauling admitted in later interviews that he had been worried that the Cavendish group would beat him to checking the models. The reader will find good descriptions of the technique and its applications in, for example, McPherson 2003. Blow 2002 offers a brief description with less physics. In 1995, he added that Corey had had nothing to do with the discovery. Dunitz, in a conversation with the author on November 23, 2010. Dunitz 1991 contains a concise summary of Pauling's achievements. See also Nye 2001, p. 117. For example, Levene and Bass 1931. 73-96 contains a good description of the first investigations. The letter was written on May 13, 1943. The fact that the article was published in 1944, during the war, may have contributed to its receiving little attention. Oster interpreted Wilkins's delay in publishing the images as a lack of interest on his part, although in reality Wilkins was working to confirm the results with better tools. Watson 1980 is especially recommended. It includes, in addition to the original Watson text, an excellent selection of reviews and analyses. Also highly recommended are Crick 1988 and Wilkins 2003. Unfortunately, Rosalind Franklin did not live long enough to write her autobiography, but there are two biographies that fill the gap very well. Randall wrote to Franklin on December 4, 1950. He added: "I am not suggesting in this way that we should abandon all idea of working on solutions, but it seems to me that research on fibers would have a more immediate, and perhaps fundamental, benefit." John Randall wrote to Pauling on August 28, 1951: "Wilkins and others are busy trying to decipher X-ray photographs of deoxyribonucleic acid." Pauling responded politely on September 25, 1951, saying he regretted having disturbed Randall. Chargaff adds: "I told them everything I knew. If they already knew about the pairing rules, they kept it quiet." Crick wrote a draft describing his approach. 
He also refers explicitly to Pauling's alpha helix model. Franklin had found eight molecules per nucleotide, while Watson reported four molecules per lattice point. Her general reluctance to assume anything about the structure is reflected in her statement: "No attempt will be made to introduce hypotheses about the details of the structure at the current stage." The complete episode is explained in detail in Hager 1995, pp. The anti-Communist atmosphere prevailing at the time is vividly described in, for example, Caute 1978. Pauling wrote to Harry Truman on February 29, 1952. Interview with the author on November 15, 2010. Judson learned of this meeting from a scientist who had worked at Caltech that winter. Apparently, Pauling tried to keep his spirits up in the face of the political problems he then had. The transcription on the web page erroneously reads "I am in direct manner" when it should say "in an indirect manner". Described in Watson 1980, p. 94. Watson wrote: "Instead of sherry, I allowed Francis to buy me a whiskey." Wilkins added: "It is not possible that he carefully examined the details of what they published in that article on base pairing; almost all the details are simply wrong." In a series of foundational articles, Kahneman and Tversky discuss this topic in depth. See also, for example, Kahneman and Tversky 1973, 1982. An excellent popular exposition is Kahneman 2011. 115-132 offers an exquisite discussion of some aspects of inductive reasoning in relation to error. Lehrer 2009 offers a detailed description of the decision process. 363-374 offers some clarifying examples. The differences appear in the prefrontal cortex, which controls emotions by reasoning about them. See, for example, De Martino et al. Pauling often repeated this comment of Ava Helen's. This is an important point, since it shows that Pauling connected the structure with the capacity to carry information. 
Pauling and Corey also referred to amino acid sequencing, noting that in terms of the dimensions involved, nucleic acids are "suitable for the ordering of amino acid residues in a protein". The project began in 1988, and the researchers studied a total of 4,200 individuals. A collection of articles describing many of the results is Bäckman and Nyberg 2010. Conversation on April 18, 2011. The precise location of the hydrogen atoms in the bases was not known with certainty. Donohue was an expert on the subject, on which he would later, in 1952 and 1955, publish important articles. In crystallography, symmetry is used to characterize crystals. This, in turn, implied that the chains were antiparallel. Letter from Wilkins to Crick, probably of March 23. Watson and Crick 1953b; Crick 1988, p. 66. See, for example, Reich et al. Gamow's involvement and his coding schemes are described in detail in, for example, Judson 1996. The event is described in depth in Mitton 2005, p. The program was announced in the British magazine Radio Times on March 28, 1949. Hoyle added: "Holding a popular opinion is cheap and costs nothing in reputation." There were other chemists who conceived their own versions of the periodic table. A fascinating read on the periodic table is Kean 2010. For a brief biography of Prout, see Rosenfeld 2003. At that time, annihilation was still considered a possible source of energy. Eddington discussed the source of stellar energy in Eddington 1926. See also Shaviv 2009, chapter 4. It is described in Berenstein 1973, p. 192. At very short distances compared to the size of the nucleus, the nuclear force itself becomes repulsive, because particles such as protons resist agglomeration. This quantum effect is known as the Pauli exclusion principle. The probability of crossing the barrier created by the Coulomb force increases exponentially as the energy of the particles increases. 
At the same time, the distribution of the particles at a given temperature is such that at high energies the number of particles decreases exponentially. The product of these two factors results in a peak at which the reaction is most likely to occur. These ideas were originally published in the late 1920s. Gamow had already presented the idea of nucleosynthesis in the big bang in Gamow 1942 and Gamow 1946. Fermi examined the problem with the physicist Anthony Turkevich, although they never published their results. A good description of the work on the problem of the mass gap can be found in Kragh 1996, p. Hoyle delivered his lecture on November 8, 1946. There is an excellent accessible description of Hoyle's work on nucleosynthesis in Mitton 2005, chapter 8. The incident is described in Hoyle 1968b. Salpeter would later have a distinguished career in astrophysics. After many years, the participants had somewhat different memories of what had happened. A good summary of the different versions can be found in Kragh 2010. The article and its significance are also described in Spear 2002. Since life as we know it is based on carbon, much has been written about the anthropic significance of the resonant level in carbon. This question falls outside the scope of our discussion. I should point out, however, that in 1989 several collaborators and I demonstrated that even if that energy level had a slightly different value, stars would still have produced carbon. This conclusion was later confirmed by the detailed investigations of Heinz Oberhummer and colleagues. For a detailed review, see Kragh 2010. Crafoord Prize press release. Chown 2001 offers an entertaining and accessible account of the history of the theory of nucleosynthesis. Tyson and Goldsmith 2004 offer a clear, multidisciplinary and humorous tour of cosmic evolution, from cosmology to biology. Quoted in Burbidge 2003, p. 218. 
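The interplay of the two exponentials described in the notes above, the Coulomb-barrier tunneling probability and the Maxwell-Boltzmann distribution, is what produces the so-called Gamow peak. In standard notation (E_G is the Gamow energy and T the temperature; this sketch uses textbook conventions, not the author's own formulas):

```latex
P_{\text{tunnel}} \propto e^{-\sqrt{E_G/E}}, \qquad
n(E) \propto e^{-E/kT}, \qquad
f(E) \propto \exp\!\left(-\frac{E}{kT}-\sqrt{\frac{E_G}{E}}\right),
```

and setting the derivative of the exponent to zero gives the peak energy

```latex
E_{0}=\left(\frac{\sqrt{E_G}\,kT}{2}\right)^{2/3},
```

the energy at which fusion reactions in a star are most likely to occur.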
In his excellent account of the history of steady-state theory, Kragh 1996 raises questions about the authenticity of the story about the film. The fact that this letter was written so early, in 1952, lends it more credibility. Described, for example, in Van den Bergh 1997. A brief summary of the events can be read in Livio 2011. Liliane Moens, for providing me with a copy. Block believed that "§§1-n" of the letter should be read as "§§1-72", because of the way the symbol "n" is written. He also interpreted the text as saying that Lemaître was free to translate only the first seventy-two paragraphs of his article. He further concluded that paragraph seventy-three was precisely the Lemaître equation that determined the value of the Hubble constant. For an excellent popular text on the discovery of quasars, the microwave background and their significance, see Rees 1997. Hoyle, Burbidge and Narlikar 2000. Livio 2000 is a review of the book. Interview with the author on March 5, 2012. Interview with the author on July 1, 2011. Interview with the author on September 19, 2011. Hoyle's original argument was against abiogenesis, the theory of the origin of life on Earth, not against the Darwinian theory of evolution. Dawkins expands the discussion of Hoyle's fallacy in Dawkins 2006. Kathryn Schulz offers a fascinating discussion of the feelings aroused by being wrong in Schulz 2010. He gives a beautiful description of the model in his popular book, Guth 1997. The relationship between the steady-state universe and the inflationary universe is discussed in Barrow 2005. The definitive results were published in Hubble 1929b. Einstein 1917, p. 188 of the English translation. There is an excellent discussion of the modern interpretation of Mach's principle in Greene 2004. There are many excellent popular books on general and special relativity. Two that I find stimulating are Kaku 2004 and Galison 2003. Reading Einstein 2005 is always rewarding. 
In his collection of witty essays, Tyson 2007 deals in a very interesting way with several related topics. Einstein himself explained the principles in Einstein 1955. Hawking 2007 presents a collection of Einstein's articles. In his scientific biography of Einstein, Pais 1982 explains the principles with great elegance. Greene 2004 puts the theory in the context of modern developments in accessible language. The Kyoto lecture was delivered on December 14, 1922. New generations of clocks continually improve in accuracy; see, for example, Tino et al. Earman 2001 offers an excellent and detailed discussion of Einstein's introduction of the cosmological constant, as well as the early years of its history. There is a clear exposition also in North 1965. Letter from Einstein to Weyl of May 23, 1923. The episode is recounted in Brunauer 1986. Letter written on September 24, 1946. Letter written on July 9, 1948. For example, on August 4, 1948. Gamow was one of the many guests. However, Gamow's name does not appear on the list of people who accepted the invitation, March 17, 1949. Letter written on September 26, 1947. Letter from Einstein to Lemaître of September 26, 1947. Laloë and Pecker 1990 did not think Einstein had used that expression either, but the evidence they presented was much weaker. This comparison was also used by Weinberg 2005. Letter dated September 14, 1931. Lemaître's ideas about the formation of galaxies appear, for example, in Lemaître 1931, 1934. There is an excellent description in Guth 1997. Calder and Lahav 2008 discuss how Newton's work alludes to at least some aspects of the effects of "dark energy". Norton 1999 discusses this problem thoroughly. Einstein could have been inspired in part by these works to introduce the cosmological constant. However, several years later, Petrosian showed that the model also predicted a decrease in the brightness of the more distant quasars, contrary to the observations. 
The equation is now: Gμν + Λgμν = 8πGTμν. Davies 2011 is a short and also accessible article. The theories of time and their relation to cosmic expansion are explained in a fascinating way in Carroll 2010 and Frank 2011. The results were published in Riess et al. Overbye 1998 wrote a wonderful description of the discovery. They are believed to be the result of white dwarfs that accrete mass up to the maximum allowed for a star of this type. At that point, carbon ignition begins in the core, and the white dwarf is destroyed by the explosion. Kane 2000 is a beautiful popular text describing the concepts involved in supersymmetry. Dine 2007 is an excellent technical text. In this presentation I closely follow the discussion in Livio and Rees 2005. A classic book on anthropic reasoning is Barrow and Tipler 1986. Vilenkin 2006, Susskind 2006 and Greene 2011 offer broad and accessible discussions of the anthropic concept and the multiverse. Mangel and Samaniego 1984 is an academic analysis of Wald's work on aircraft survival. Wolfowitz 1952 chronicles all of Wald's work. Kepler's model is described in some detail in Livio 2002, p. 142. Well explained in Vilenkin 2006. This "landscape", which would contain a huge number of potential universes, is the subject dealt with by Susskind 2006. Weinberg 2005 presents some of Einstein's mistakes. Ohanian 2008 offers an excellent compilation and review of all of Einstein's errors. Einstein wrote his last autobiographical notes in March 1955, and finished them with comments on quantum mechanics. Kahneman 2011 offers a broad and accessible account of ideas and discoveries about decision making.

This brings us a little closer to the history of the book, on which excellent research exists. Let us sketch the evolution of writing and of its supports. We know, from the enormous number of tablets that survive from the Mesopotamian region, that the material used was basically clay, although metal and stone were also employed. Papyrus is another material that served as a writing support over a very long stretch of human history and that, together with ancient Egyptian script, constitutes one of the transcendental creations in the history of humanity. Sumerian education has been studied very thoroughly; the surviving "school texts" were, in effect, the notebooks in which students did their homework. We cannot fail to point out that it is also in this Mesopotamian region that we find an early library, the famous library of Ashurbanipal at Nineveh. Following Contenau, we may say that in Ashurbanipal's time, as in our own, there were people inclined to build their libraries at the expense of other people's; to protect against this most dangerous plague, books were placed under the protection of the gods. The Egyptians, whose writing had three forms, had papyrus at their disposal, far superior to the clay tablets of the Mesopotamians. Walker describes the technique for manufacturing the sheet that was to be used for writing, made from the stems of the papyrus plant. The size of the sheets varied: their width ranged between 50 and 170 cm, the latter being the most frequent measure, and the length of the rolls was also very variable; the Harris Papyrus, the largest known, reaches a length of some forty metres. The Mycenaean monarchy, centred on the palace, regulated economic, social and political life, and did so through the use of writing and the keeping of archives.
This writing was done on tablets. It is true that Hellenists today tend to see the Aegean (Minoan) and Mycenaean civilizations as "the common work of a single people, which is none other than that of the future Greeks." In 1939 the newly discovered Linear B script, so named to distinguish it from the pictographic script, gave access to the first archive of written documents from mainland Mycenaean Greece. Soon after its decipherment it became clear that Greek writing was later done mainly on papyrus. The word byblos came to mean "documents", because the Greeks prepared papyrus sheets which, once written, were folded horizontally several times and sealed, as we know was done with letters and documents. As for Mesopotamian influence on Greek writing, Turner points out that the pen, which displaced the brush, was most probably imported from Mesopotamia. Herodotus represents the first manner and Thucydides the second, and Plato is one of the representatives of the opposition to the book. Plato had a good collection of books, yet, as we have seen so often, there is novophobia here: in this specific case, opposition to new media for the diffusion of thought, for the setting down of texts on new supports. A well-studied aspect is the stage of the orality of the texts. Alberto Manguel has devoted the chapter "Silent readers" to the transition to silent reading, which has the advantage of being much faster. It is not entirely true, therefore, that scriptio continua, writing without any separation between words, was an insuperable obstacle to silent reading.
The work entitled "Between the Volume and the Codex" deals with this period. The Greeks undeniably, but also the Etruscans, contributed to the progress of knowledge and of interest in books in Rome. The still-small child was entrusted to the grammarian, who taught him to read and write; he then passed into the hands of the rhetorician, who initiated him into the humanities. It is at this stage of education that children and young people began to receive the influence of pedagogues, generally Greek instructors of slave status, charged with taking the children to school and then helping them with their school tasks. According to G. Cavallo, in a first stage reading and writing were almost exclusively a practice of religious and legal matters, collected in the so-called libri lintei ("linen books") and in wooden tablets. People read privately and publicly, sitting, reclining or lying down; reading aloud was common, and specialized readers were sometimes employed, so that reading became indirect and oral. This meant that one had to be very experienced to pick out the separation of the words and at the same time grasp the meaning. Books were at first quite scarce and very expensive, but in the age of the emperors their prices fell considerably. James Stewart, in his work "The Intimate Life of the Romans", provides very important information about Roman education. It is true that at the first level the book was not used, because learning was basically by memory, although we know that the reading of Aesop's fables was employed. The student practised reading a great deal, preferably using the works of Homer. Teachers might own copies of a few books and surely carried many others in their memory. Other texts widely used in teaching at this level were four works of Hermogenes of Tarsus. When the Turks took Byzantium, many intellectuals moved to Europe, contributing to the European Renaissance to a degree not yet well gauged by specialists.
The European Middle Ages is, in large part, the heir of Greco-Roman culture. Manuscripts continued to be copied in continuous script. Another novelty, according to Malcolm Parkes, was the transition from oral to silent reading. A further important change was the passage from the book-roll to the codex; the roll had one notable inconvenience, the difficulty of easily locating a specific passage, in addition to other, lesser ones. The Middle Ages knew three types of teaching and study institutions: monastic schools, urban schools and universities. The first two, as E. Jeauneau reminds us, are mentioned in a capitulary of 789, in which Charlemagne orders that schools be created in every monastery and every bishopric. Parallel to the movement that led the trades to join together in guilds, people of study grouped together to defend their rights and privileges, and this is the origin of the universities. During the medieval period the book played a very important role. Recall that the two great teaching methods were the "lesson" and the "dispute"; the dispute was held on a topic chosen in advance to suit a given programme, or on an improvised theme. This should not make us forget that this education was essentially elitist, typical of a religious elite. Illustration began to play a very important role, and binding became not only a technique but above all an art, and a valuable one, since in receiving the treatment of a codex the book's leaves had to be joined by sewing and protected from deterioration. The monasteries played a very important role in the preservation and dissemination of books, slow though that dissemination was, because it rested on the patient work of the copyist monks. Soon humanism would also play its part. Relatively little time would pass before a new writing support appeared and, in very little time, displaced parchment: paper.
Paper is a Chinese invention that begins with the so-called "tissue paper" of the first century of our era, whose technical limitation was that it was a residual product of the manufacture of silk wadding and clothing. The Chinese also wrote on a paper called "Baqiao paper", made from hemp fibres mixed with a small proportion of ramie fibres; but Baqiao paper too was a residual product, a by-product of the manufacture of hemp wadding and garments. The great leap came in the year 105, when Cai Lun devised a method for making paper from tree bark, hemp, rags and broken fishing nets. Shortly afterwards Zuo Bo perfected this technique, obtaining a finer paper. The Arabs later took up the manufacture and export of paper to Europe, and in the year 1150 they established paper mills in Spain. The use of paper, so economical and practical, displaced every other writing support, and this greatly encouraged the development of the economy and culture of the various countries. The Encyclopaedia Britannica records a very important piece of information: before the invention of the printing press the manuscript books of Europe could be counted in the thousands, but by 1500, only some fifty years after the invention of printing, there were already more than nine million books. The first printed books were called incunabula, from a Latin expression used in 1639 to describe the infancy of the printing art. In their first stage, the incunabula imitated the codices; Svend Dahl, referring to this fact, tells us that the printers "succeeded to an amazing degree in transferring the appearance of the medieval parchment codex to the printed book, producing works that did not suffer in beauty beside the illuminated manuscripts."
It is at this stage that the publishers turned from humanists into merchants, which allowed the crisis to be overcome; a true printing business organization was already emerging, with a very modern set of norms governing the relations between the writer, the publisher, and the civil and ecclesiastical powers. Religious censorship arose, it is true, and was especially marked in Spain. Printing technique improved remarkably with greater quality in the casting of type, as well as in the quality of the ink. It was also a stage of great flourishing in the book trade. Consider the case of the famous Encyclopédie, which was a great bookselling business. Daniel Roche, in "Do Books Make Revolutions?", recounts that the bookseller in question died leaving an estate of approximately one and a half million livres tournois, when at the time of his marriage, in 1741, he had owned only 50,000 livres. It was also the age of the book pirates: the Lyonnais Duplain is a legendary figure of the filibuster printer, a specialist in unpresentable books, pirate editions that allowed the printers of the provinces to compete with those of Paris.

Computers are no longer the exclusive domain of computing geniuses; today they are within the reach of any person of normal average intelligence and, in some cases, of our own homes, where a child of eight or ten now learns to use the Internet as one more resource for his education. I refer not only to texts but to all kinds of multimedia material: the possibility of visiting museums virtually is a valuable source of knowledge and of artistic sensibility, as are the "voice museums" of famous people, or the chance to admire and read facsimiles of works that have been digitized because they needed safeguarding, digitizations in which the resolution of detail can be superior to what direct observation of the physical manuscript allows.
And what we have said about this chronicle we could say of the editio princeps of Don Quixote, to mention only two of the many works that have already been digitized. If readers once enjoyed only reading that used atoms, today, without abandoning those books, a wonderful new world has opened; there is no justification for being reluctant to enjoy its charms. That is why the new technology is a challenge, not a simple fashion or something merely incidental. But to think that education will become digital in the relatively near future is to ignore completely the economic reality of most of the countries of the world, of what is now called the global village. We have already given truly shocking data about this reality, and it seems to me that leaving it aside leads to an inconsistent analysis, or to one that tries to generalize the concrete experience of a certain, surely affluent, socioeconomic group as if it were representative of an entire society. I recommend reading the article "From the Missing Link to the Digital Divide". Let us not forget that in our global village, for every 100 people, 80 live in inadequate housing, 66 have no drinking water, 66 have never made a phone call, 50 are poorly nourished, and only 1 has a university education. Nor let us forget that if in 1820 the difference between the richest country in the world and the poorest was only 3 to 1, by the year 2000 it was 72 to 1, and the gap tends to grow ever wider. With this I mean that we should not spin abstractions, for they lead nowhere good and only mask reality, or at least, consciously or unconsciously, attempt to. This is not novophobia or ignorance: I read that interesting "book" after having "downloaded" it, in Word format with 12-point Arial type, duly justified, with prior spelling corrections, and printed on A4 paper.
Later we shall have occasion to analyse, though briefly, the challenges of the new hypertext reading and of the reading I call global or totalizing. Paradoxically, in a global society that has, and will increasingly have, a diversity of information sources in the most varied media, relatively little is read; and if we speak of quality, what is read is dramatically insignificant. "I know nothing about computing." "The Internet? I have problems enough in class already." "They are the basis of a technocratic education that leaves students depending on devices to add two plus two." Neither these nor other prejudices have any justification. The explanation I find for these doubting or frankly hostile attitudes is the novophobia of which the philosopher Mario Bunge speaks. And nowadays, in a world that changes so rapidly, whoever does not keep learning all his life stagnates and is left behind. In Argentina we say "the shrimp that falls asleep is carried away by the current"; you say it too. It is an obligation of the teacher to keep up with the latest innovations, to get the most out of his teaching work, and also for his own experience, because to the extent that he can feel pleasure in these advances he can transmit that feeling to his students. In our classrooms "the computer corner" appeared; aware that the answers would come with practice, the adventure began. It is inconceivable to contemplate the disappearance of reading. I consider the stage of oral culture to have been the first phase of the great cultural development, and it is impossible to conceive of a new stage of pure orality. Let us reflect on the fact that the book is just one of the many supports that writing has had: a support that meant an extraordinary technological advance, comparable only to the one we are currently living through with computers. What is also not new is novophobia.
It took a long time for silent reading to take hold, even after the papyrus book and later the codex appeared. And of course the appearance of the printing press made those who wished to continue with manuscripts raise an outcry. Its detractors argued that the reproduction of books in large quantities would lead mankind to perdition, because man was not prepared to read everything that might fall into his hands without the filter of the custodians of knowledge. Román Mazzilli, in an article devoted to technophobia, tells us: «What was said about the book at the time of its birth? It too was an object that came to destroy the communion of people who until yesterday gathered in circles to hear oral narrations and today isolated themselves to establish contact with an object: the book». Saez tells us that the Venetian referred "not only to the danger he saw in setting heterodox, immoral texts in writing, but to the fact that the printing press would divulge knowledge among the 'ignorant'; he thus feared that the printing press would end the traditional monopoly of a few over written culture." Novophobia and conservatism characterize the reaction to the revolutionary invention of the printing press. We read in the aforementioned work of the contempt generated by the supposed cultural democratization that the invention was going to bring. The resistance to change was substantive. Does this attachment to manuscripts not remind us of the attitudes of those today who maintain a closed opposition to electronic changes in the book's support? In aristocratic houses, the warnings and advice that nobles composed for their children kept a manuscript form that at once protected their secrecy or privacy and allowed the incorporation of corrections and additions. But beyond the nobility, the reading of manuscript texts was maintained throughout the early modern age.
However, all this is not as tragic as it is usually painted; what is new is that cyberspace is a liberating medium. As the interpersonal bond is not face to face, the person becomes invisible and is expressed only in words, which is why email and chat are the most important aspects for almost all Internet users. The telephone does not allow time for proper reflection, which is obtained in relationships and communications via email, whose deferred structure builds a pause into communication. Another aspect that attracts young people is games, ever more numerous and sophisticated, which can be accessed via the Internet. Should we condemn the new medium for all this? Of course not, since something similar happened with books, magazines and newspapers: it is undeniable that much of what is published and read is literary material of scant cultural value. More and more is published and read for simple, trivial distraction. But we must be understanding and accept that the main interest of the greatest percentage of human beings is eminently hedonic, and that it is satisfied through readings that, at best, merely distract. The downside of this trend is that less is read each time, and the content of publications deteriorates or is devalued. We can clearly understand that the Internet does not endanger the book, because, as we have already pointed out, the core is writing and reading. Whether the book is papyrus, parchment, paper or electronic is accidental; the essential thing is writing, and writing necessarily demands a support, unless we intend to return to the stage of orality. And how will the various socio-cultural areas of the planet behave in this regard? The opposite side of this coin, as Petrucci points out, is Japan: a country of generalized reading, owing to the prestige of writing and because the Japanese consider it a duty to be informed and formed by written culture, and where, in addition, the prestige of schools and universities is beyond all doubt.
Professor Simone devoted his paper to reviewing the four changes that have brought "the dissolution of a paradigm of information and culture". Savater, for his part, gave as an example of the dangerous influence of the new technologies on education the disappearance of the spelling and syntax that characterizes many emails; this last aspect is very important from the pedagogical point of view, and we return to it later. Sartori specifies well the relation of television, and of the Internet, to education: "But in the sphere of what interests me, that is, the paideia, the formation of knowledge, everything depends on the user. If I give a wonderful machine to a true illiterate, he will not know how to use it. It is the capacity of the user that makes the use of the Internet positive. If the video-child enters the world of the Internet, the same problem is repeated: he may search only for games, or for who knows what. Every machine needs a context of valuation on the part of the one who uses it. If we put chimpanzees on the Internet, they will play at chimpanzee level, but no more." A very important aspect of the new technologies concerns the new way of reading. With the paper book or magazine, reading is determined by the author: to read a book from beginning to end implies a linearity predetermined by the structure the author has given it. Hypertext, defined as a "form of organization of texts and information" in which, "instead of reading a text continuously, certain terms are linked to others through relations between them", is different: hypertextual reading is the correlate of hypertextual writing, and hypertext generates a multilinear, multisequential reading. On the web, when a text is cited, it is possible to link to the full text; in print it is possible, through the bibliography the author cites, to receive guidance on where to go to deepen the topics covered, but it is impossible to interconnect the texts themselves.
There are currently, for example, many historical documents that have been digitized, are on the network, and can be reached via hyperlinks with a simple click of the mouse. The most important thing is to read them in a multidimensional, multidirectional way: hypertext allows the reader, while reading a specific text, to reach subjects that are mentioned in it and about which he would like information at that very moment; hence the non-linear nature of hypertext reading. It is up to the reader to evaluate the quality of the information accessed, but there is the great advantage that these hyperlinks usually lead to further hyperlinks, so one can skip between texts and discriminate their quality. Ultimately it is the reader who must decide what is useful, and that is why a great analytical and critical spirit is required. If we consult a certain encyclopedia, for example, we find a very wide hyperspace; hyperspace is the term that describes the total number of locations, and all their interconnections, in a hypermedia environment. In the Encarta encyclopedia, for instance, there is in every case a window that lets us return to the cover of the encyclopedia and immediately search for information in it. Hypertext has truly revolutionized reading; the advantages of a text in electronic format are undeniably fantastic, but its use has to be properly dosed, with clear and precise objectives, and certain tools must be placed at the reader's disposal to help access the information contained in the hyperstructures. Hypertext reading demands clear and precise goals when one is investigating, because one runs the risk of wasting time; we can sometimes find pleasure in an apparently chaotic reading, though in the end it is not chaotic, because it will always be a function of our interests.
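The contrast just drawn between the author's fixed sequence and the reader's many possible routes can be modelled as a directed graph. The following sketch in Python, with invented page names and links chosen purely for illustration, shows the linear path of a printed book next to the set of pages a hypertext reader could reach from a starting node:

```python
# Linear reading: a fixed sequence predetermined by the author.
book = ["chapter 1", "chapter 2", "chapter 3"]
linear_path = list(book)  # the reader can only follow this order

# Hypertext: a directed graph; each node links to others, and every
# reader may trace a different path through it (page names invented).
hypertext = {
    "home":       ["history", "printing"],
    "history":    ["papyrus", "codex"],
    "printing":   ["incunabula", "history"],
    "papyrus":    [],
    "codex":      ["printing"],
    "incunabula": [],
}

def reachable(graph, start):
    """Collect every page a reader could reach from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(reachable(hypertext, "home")))
# → ['codex', 'history', 'home', 'incunabula', 'papyrus', 'printing']
```

The point of the sketch is only that in the graph there is no single privileged order: "codex" can be reached through "history" and then lead back to "printing", which is exactly the multilinear, multisequential reading described above.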
Hyperspace and hypertext captivate and provide unsuspected pleasure. P. Halaban points out that mediating elements allow the individual to act on and transform the environment: tools do so in the physical medium, while signs, as symbolic mediators, act on the subject himself. This is a challenge, but in the end it is the great challenge of every teacher: to encourage reading, to create the habit of it. The challenge is gigantic, and it usually appears overwhelming insofar as the gap between poor, backward countries and rich, developed ones tends to grow. Optical discs are "read" by an optical reader; CDs are those used to store audio. In a very short time we have become familiar with these electronic books, which have the advantage, among others, of compressing thousands of pages onto one or a few discs. What a relief to be able to have an entire encyclopedia of 30 or more volumes comfortably housed in a few centimetres of space. Another great advantage of optical discs is the ease and practicality of their use; still, for a simple query I find the two printed volumes of either the twenty-first or the twenty-second edition more practical. Yet another great advantage of these books on digital supports is the ability to interact with the book: one can make notes according to one's degree of knowledge and to what the images suggest, to be exploited later, or enjoyed as simple playful pastime. The CD also carries very interesting games for children and young people, and the way of learning with them is really extraordinary. A little further on we shall see, among the disadvantages of books on optical discs, the problem of reading whatever text they contain, something that also holds for texts on the Internet; but before that we must review other great advantages of books on optical discs.
For the convenience of users, the French collection comes in an elegant three-CD case that can be placed next to the books; each CD carries very attractive educational games, and there are also valuable links to the Internet. We have mentioned that one of the drawbacks of electronic books is reading on the monitor or computer screen, which tends to produce eye fatigue very quickly. The authors of the work also give their email addresses, which makes it easy to contact them if one wants to know more about the topic developed or to send comments and concerns. The thematic content is presented like an open book, showing two pages with their correlative numbering, and one can move forward and backward with the mouse; the text has its own interactive index, and within the text hyperlinks to other texts and images are used. There are really two different approaches to the problem of reading on screen. One uses the voice to facilitate reading, that is, it replaces the eye with the ear to avoid the visual fatigue caused by reading on a computer screen. The other seeks to improve the image the user sees, so that reading is comfortable and pleasant thanks to the quality of the displayed image. The two approaches are not exclusive and could be used in a complementary way, although there are as yet no products that combine them. He cannot conceive of a world in which all books fit into a single volume, nor could he renounce the individuality of the work, nor would he accept the physical non-existence of The Alexandria Quartet. But the practical advantages of the e-book are so overwhelming that no cultural metaphor will resist them. «And the future, hey, there is no stopping it».
This invention makes it possible to manufacture books that are the same as ever, with the same touch, weight and smell, but which have the qualities of a computer screen. Likewise, they can take the form of newspapers materialized on a rechargeable flat screen that avoids the use of paper; each new day we would access from them the news the editors put into circulation. These books are called e-books and began to be commercialized at the end of 1998; their most common visualization is the computer monitor, whether desktop or laptop. In fact, analysts expect the market for e-books and other electronic documents to reach 70 billion dollars in the coming years. Currently this library exceeds two thousand titles. It was a surprise for him to learn that some of his collaborators were not the 18-year-olds he had supposed, but mature teachers. A very interesting aspect, from the pedagogical point of view, is Hart's thinking about books and reading. Hart points out that the Gutenberg Project does not really involve high technology: what it does is place at the disposal of potential readers, completely free, a library completely at hand. He is optimistic about the benefits of his project: "In any case, I think there will be many people who will read and use our books. I believe that electronic books are very useful for students." The titles are most of the time Anglo-Saxon classics, or products of Western culture translated into English; "what often fails is financing." Scanning a page costs between 1 and 4 dollars, but digitizing entire collections requires millions. Readers, for their part, also need a computer, an Internet connection and money. Can the digital library reach them? Not unless certain services are developed, one of which is the creation of mirror sites of the cyber-libraries in the poor countries themselves. Hart, however, has a hard time doing the same with literature in French.
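The per-page figures just quoted make it easy to see why whole collections run into the millions. A small back-of-the-envelope calculation, where the collection size and average page count are invented for illustration (the text gives only the 1-to-4-dollar per-page cost):

```python
# Rough cost arithmetic for digitising a collection at the quoted
# per-page rates. Collection size and page count are assumptions.
pages_per_book = 300           # assumed average length of a book
books = 100_000                # a hypothetical research collection

cost_low = pages_per_book * books * 1    # at $1 per scanned page
cost_high = pages_per_book * books * 4   # at $4 per scanned page

print(f"${cost_low:,} to ${cost_high:,}")
# → $30,000,000 to $120,000,000
```

Even modest assumptions thus land in the tens of millions of dollars, which is the scale of financing the text says so often fails.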
It is also worth highlighting the Avizora Library, which has an excellent catalogue of books in Spanish and other languages to read or download, ordered alphabetically by title. We also find publishers digitizing works, or chapters of works, from their backlists; the number of works dedicated to the history of Mexico is considerable and very significant. The DynaBook was to have a screen as transparent as crystal and no keyboard; it also had audio capabilities, and one of its greatest achievements was touch interaction with the screen. In 1986 Franklin Electronic Publishers put an electronic dictionary on a handheld device, producing the first electronic book. Most devices imitated the computer model in their early versions, so as to offer virtually the same capabilities that the computer has; since such presentations constitute the protohistory of the new support, we shall review them, if only schematically. Information is loaded into the SoftBook through flash cards that allow storage of up to almost 100,000 pages. Another electronic book consists of a screen handy enough to hold in one hand and can contain about 4,000 pages, that is, about 10 normal novels. The design and conception of the EveryBook, from the company founded by Daniel E. Munyan in 1995, is radically different from that of the other electronic books; this model represents a leap beyond the computational models toward the book itself. It can be used by students, professionals and the general public. One has only to press a control and in a few seconds the text appears across its more than 200 pages; the correlative movement of activated spheres produces another page of text, and so on. The storage capacity of the system is very large, since it can be loaded from a computer, a card or a high-density optical disc. It can also combine the content of the text with moving images, or offer independent clips.
But it will be time that discriminates among the successive models. It is not a matter of expressing oneself in more or less perfected kinds of computers. What this system seeks is to get rid of any circular support, with its spiral reading by an optical head, and to move to a sequential sweep over a rectangular support, a reading similar to the one our eye performs on traditional paper. In this way the system also aims to do away with cellulose forever; that, however, will depend on economic factors that are often not taken into account when analysing an evolution already under way. It is a work that we teachers should take advantage of with our students, especially at the secondary level, and also at the higher level as an introduction to specific topics they wish to investigate. I believe a new version of the CD-ROM should be planned, to overcome some limitations of the current version and to include the technical features that digitization on such a support allows; indeed the excellent quality of the work warrants it, so that its readers may enjoy new aspects of what properly belongs to multimedia resources. Currently we have several works presented both in the form of atoms and in bits; the pre-digitization process now used in book publishing surely makes this easy. I think we can be sure that the end of the book will not come, much less the end of reading, even though some Francis Fukuyama of this field may believe it. Technology will not stop, and therefore new supports for writing will keep appearing, always, we hope, to the delight of those who love reading. The new support also has, in general, a series of immeasurable advantages; in short, "it has endowed science and culture with a potential and a dynamism, even at a distance, that paper could never dream of offering."
Umberto Eco's lucid mind makes us notice the postures, most of the time laughable to us, that pit the "digital missionaries" against the "traditional bibliophiles". The poverty of knowledge of such elementary concepts among people working at the higher educational levels is truly deplorable. As some specialists point out, we have gone from an industrial society to a knowledge or information society. To try to oppose technological progress is completely meaningless. Today we know that the digital circles the world and reaches everywhere. If there are still information signals sent through analog systems, this is only a holdover from the past that will soon be completely replaced by digitization. My knowledge of history allows me to appreciate the great significance of translators and of the famous schools of translators that have played an extraordinary role throughout history. How can one fail to recall the role of the Arabic translators in the Iberian Peninsula? In this sense, computing rests on a traditional cultural support: writing. This belies the widespread idea of an audiovisual society. Writing is, in fact, a prodigious vector of information. I believe that those who think and write otherwise do not reflect adequately. I cannot imagine how they could express their thoughts, which are basically abstract concepts, with a language based solely on the audiovisual. Of course they could reply that through audio we could learn their ideas, their core abstract conceptions. But the oral is temporal par excellence; it is ephemeral. It also demands an extraordinary development of memory, of memorization. The listening public depended on the aedo or rhapsode, the minstrel or the troubadour. It could not return, whenever it wished, to what it had heard and loved. In an electronic article today, by contrast, I can choose between reading the text of the article or listening to it, in full or in selected fragments, but the decision is mine. 
I think reading really brings many more advantages: first, because reading is private, silent, very fast. It is easier to stop and return to the same text, without having to use one's hands to stop the reproduction of the sound and start playback again. In any case, those of us who are true lovers of culture and of reading make the most, or so we claim, of all technological innovations in the service of culture. These capacities, as Professor Thierry Leterre reminds us, are precisely those that are learned, or should be learned, in school. But even for those who passed through schools and universities, the truth is that much of what they know they owe to self-teaching. It happens that in the early days of artisanal papermaking, waste materials such as cotton and linen rags were used, with a high cellulose content, which did not require the use of harmful additives. Moreover, those papers were of very high quality in terms of durability, as shown by the fact that many of them have already survived for several centuries. Then came the introduction of alum, as well as of chlorine as a bleach. Chlorine was one of the worst enemies of paper when its residues were not completely removed during the treatment used to bleach the colored rags. All this was aggravated after 1850, when the sulfate was combined with rosin - a resin obtained from pines - to precipitate this material in the fibers. The pulp obtained by this method retains all the components of the wood, including the harmful lignin, although some water-soluble compounds are removed during the grinding process. It is sad to learn that a large part of the paper manufactured today is estimated to have an average life of no more than fifty years. The challenge, and this is the value of the research and its consequences, was to develop a chemically treated pulp that was equal or superior to high-quality paper. 
The research, carried out together with industrialists, showed that this was possible, without the use of harmful additives and even with the introduction of carbonate fillers to neutralize any acid residue. The technical solution had been found; the stumbling block would now be the economic problem, because the new paper was more expensive and the producers were not sure of its profitability. Pressure on governments, so that they in turn would press the producing companies, was and remains the task of the world's archives and libraries. In the United States, one of the pioneering countries in the search for a better-quality paper, both because of the interest of professionals connected in one way or another with books and because of the economic power to develop and implement it, the road to be traveled was long and difficult. Nevertheless, in October 1990 a political resolution on the need to use permanent paper was converted into a public law regulating the rules to be followed in manufacturing, and in state use, for two types of paper. The papers so designated are those with a pH value of not less than 7.5 and with a minimum reserve of calcium carbonate of 2%, plus certain physical requirements such as a given resistance to folding and tearing and the retention of color. Alkaline paper contains no mechanical wood pulp and has a minimum pH value of 7 and an alkaline reserve of 2% or more. Generic paper is paper without a specified pH value and without an alkaline reserve; its duration varies and is uncertain, but much of it will last between 50 and 100 years. Permanent paper is paper that will last hundreds of years without significant deterioration under normal conditions of use and storage. In Australia, since 1991 and after a long struggle by conservators, archivists and librarians on behalf of permanent paper, the battle has been won. 
The electronic book, being a product that does not harm the environment, also represents a great advance in the important field of man's interrelation with his surroundings. Much of what we read on screen is then printed, of course losing the hypertext in the printed format, but knowing that we can return to that intertextuality whenever we want because we have stored it on our disks or hard drive. The "Je clique, donc je pense", presented in the manner of Descartes' "cogito ergo sum", reflects the click generation. It is true that clicking makes us jump and begin to "navigate" as cybernauts, but it is we who have decided to leave the encyclopedia and go into cyberspace, and we know where we are going. This is, undeniably, a very large qualitative difference between zapping and clicking. We await a school and an educational system that can overcome the serious problems and damage caused by the zapping school. The traditional book, too, will be influenced by computing and will begin to use, within obvious and considerable limitations, Internet resources and electronic books. And in truth this system achieves its objective of making us jump from one word to another to compare and reinforce knowledge. But to return to the rich countries: the library that stores its material electronically is becoming a reality there. This new library, as J.M. Oviedo points out to us, has its dark side: the incineration of bibliographic materials that are read little or not at all, and that merely take up space, is already a reality in the United States. As Oviedo reminds us, the fiction of Fahrenheit 451 is coming true. Oviedo concludes his article by telling us: "The computer age has opened wonderful horizons for us, but it has closed others; it is making us forget that we read not only to be informed, but for pure pleasure, where there are no rules or quantifiable data". It is undeniable that the library, like the book, is assured of its existence. Undeniably it will undergo the changes that technology imposes. 
A library is no longer merely a repository of books, magazines and newspapers. Excellent CD-ROMs, too, awaken similar possessive and amorous feelings in lovers of reading. A video library, for example, is capable of giving us not only excellent information but unimaginable pleasure; one can learn in a pleasant way. Here, too, we must set novophobia aside. And, on the other hand, we must bear in mind that the rich countries account for only one sixth of the world's population, and that their supposed or real problems are therefore not those of all mankind. Apart from that, we consider that the trend toward the electronic library should not eliminate, at least in the short- and medium-term future, the presence of the physical book, magazine and newspaper. When that happens, we will simply have evolved from one writing support to another. The reopening of the celebrated Library of Alexandria is an example of the library of today. Its 45,000 square meters will also house three museums and four art galleries. The project cost 220 million dollars, of which 120 were contributed by the Egyptian State and the rest by donations. He considers the "world of the text" as a world of objects, forms and rites whose conventions and arrangements serve as a support for, and compel, the construction of meaning. 
In addition, he considers that the "reader's world" is made up of the "communities of interpretation" to which individual readers belong. For this reason, throughout this book a double attention will be paid: to the materiality of texts and to the practices of their readers. McKenzie pointed out with great acuteness the double set of variations - the forms of the written word and the identity of its public - that must be taken into account by any history seeking to restore the shifting and plural meaning of texts. In fact, the history of the book has taken as its object the measurement of the unequal presence of the book in the different groups that make up a society, as cultural translations of social differences. That trajectory has accumulated knowledge without which other inquiries would have been unthinkable, and this book impossible. However, it is not enough for writing a history of reading practices. First of all, it implicitly posits that the great cultural differences are necessarily organized according to a prior social division. Because of this, it relates differences in practices to certain social oppositions constructed a priori, whether on the scale of macroscopic contrasts or on the scale of finer differentiations. The truth, however, is that social differentiations are not ordered according to a single grid of social division that supposedly governs both the unequal presence of objects and the diversity of practices. The perspective must be inverted: we must first identify the circles or communities that share the same relationship with the written word. For each of the "communities of interpretation" thus identified, the relationship with writing is enacted through techniques, gestures and ways of being. Reading is not just an abstract intellectual operation: it is a bringing into play of the body, an inscription in space, a relationship with oneself and with others. 
A history of reading need not be confined solely to the genealogy of our contemporary way of reading, in silence and with the eyes alone. It also implies, and perhaps above all, the task of recovering forgotten gestures and vanished habits. The stakes are considerable, for this task reveals not only the now-distant strangeness of once-common practices, but also the earliest and specific status of texts that were composed for ways of reading that are no longer those of their readers today. When such reading is addressed to the ear as much as to the eye, the text plays with forms and formulas apt to submit the written word to the exigencies of oral "performance". The chorus intervenes to sing its anxiety, until Theseus interrupts it, exclaiming: «Oh!». At the chorus's request he will later reveal the content of the tablet, not by reading it aloud, but by summarizing it. He had already read it clearly, in silence, during the chorus's song. He then offers a summary of the oracle. He does not read it: he has done so already, in silence. We have already seen that the voicing of the written word was programmed, negatively, by the absence of word intervals. And if that voicing was a value in itself, why would anyone have felt the need to abandon scriptio continua, the technical obstacle to the development of silent reading? Because the absence of intervals was an obstacle, and it remained one. And because, as we have just seen, the Greeks seem to have been able to read silently even while keeping to continuous script: a technique reserved for a minority, to be sure, but an important minority that included, notably, the dramatic poets. The introduction of word spacing was not enough to generalize silent reading in the Middle Ages. The demands of scholastic science were needed for the advantages of silent reading - speed, intelligibility - to be discovered and exploited on a large scale. 
Indeed, it was at the heart of scholastic science that silent reading was able to take hold, although it remained practically unknown in the rest of medieval society. Extensive reading seems to be the result of a qualitative innovation in the attitude toward the written word, fruit of a new and powerful mental context capable of restructuring the traditional categories of reading. It can be approached, however, through the experience of the theater. We are still far from real books and reading practices, but the time of Cato marks a moment of development. In 181 BC the so-called "books of Numa" were found, papyrus rolls wrapped in cedar leaves. In this way, imported Greek books provided the model for the Latin book that was about to be born. In other cases a certain level of literacy was sufficient: in particular, the reading of manifestos, documents or messages was made easier by the repetition of certain formulas. The reader took the roll in his right hand and unrolled it with his left, which held the part already read; when the reading was finished, the scroll remained rolled upon itself in the left hand. These phases, as well as some complementary gestures and moments, are amply documented in figurative representations, especially on funerary monuments. Some sources, both iconographic and literary, also attest the use of a wooden stand that held the scroll during reading and rested on the lap of the seated reader, or was mounted on a small support. In the case of illustrated scrolls, the reader's eyes could "read" a sequence of images almost simultaneously, filling in with the mind the temporal or spatial distances between the scenes represented. The iconographic record also shows the situations of reading. From literary sources we know that people also read while hunting, waiting for the quarry to fall into the net, or at night, to overcome the tedium of insomnia. 
The conditions for learning to read differed according to period, social status and circumstances. In general, learning took place in the family environment, with private teachers, or in the public school. The phases and levels of training varied, and probably proceeded through letters of different sizes, starting with the largest. The ability to read could stop at the indispensable minimum, or culminate in complete training with teachers of grammar and rhetoric, reaching very advanced levels, up to perfect mastery. But even before learning to read, one learned to write. Once reading was confident and fluent, the eye moved faster than the voice. It was a reading at once visual and vocal. The laudatory expression of Petronius, librum ab oculo legit, referring to a slave-reader, alludes to this ability of the expert eye to decipher writing immediately; but the question remains whether that reading was only visual or also vocal. The most usual way of reading, whatever the level or the purpose, was aloud, as Quintilian himself tells us and as various testimonies confirm. Reading could be direct, or performed by a reader who stood between the book and the listener, whether an individual or an audience. In the case of certain poetic compositions, several reading voices alternated, according to the structure of the text. These practices also explain the close interaction between literary writing and reading. That is why an expressive reading was required, especially in readings for an audience: a reading modulated by tones and cadences of voice appropriate to the specific character of the text and its formal movements. It is no accident that the terms indicating the reading of poetry are often "singing" and "song", since it is the voice that interprets. In short, to read a literary text was practically to perform a musical score. 
To read a complex author in depth meant not stopping at the "skin", but reaching the "blood" and the "marrow" of the verbal expression. Voice and gesture gave reading the character of a performance. Expressive reading in turn conditioned literary writing, which, being intended to be read aloud, demanded the practice and style of orality. Thus the borders between the book and the spoken word are very blurred, and the voice became part of the written text at every phase of its journey, from sender to recipient. Even so, there were differences of loudness in reading aloud, according to the occasion and the type of text. Leaving aside the case of expert or professional readers, reading was a slow operation. There were also other factors that made reading difficult. The writing itself was quite confusing, since, being continuous, it prevented an insufficiently practiced eye from immediately identifying the separation of words and grasping the meaning. Yet there was also an advantage in the use of continuous script. Recitations took place in public spaces: auditoria, stationes, theatra. Thanks to these "rites", participation in the "launch" of books and in the circulation of certain works extended to a more varied public, not only to genuine readers. Augustus himself had readers in his service, and more generally we must suppose that this practice was common even among those who were able to read for themselves. Likewise, private reading performed by a reader during a festive gathering is well attested; and there are also cases of "trial readings" that the author of a work offered to a few close friends. Silent reading was rather less frequent, but it was not at all unusual. We must suppose that it was practiced by individuals who were following a reading performed aloud. 
There was also reading in a low voice; this corresponded not so much to the level of the reader as to factors of another order, related to the situation of reading or the nature of the text. But there were other readings as well, which responded to the demands of a stratified public, such as the one that took shape in the first centuries of the Empire. The emendatio - a process that arises as a consequence of the transmission of manuscripts - required the reader to correct the text on his copy, so that he sometimes felt tempted to "improve" it. The iudicium was the process of assessing the aesthetic qualities or the moral or philosophical virtues of the text. The reader had also inherited from Late Antiquity a corpus of grammatical knowledge that served more to facilitate the process of reading than to awaken interest in the language itself. These grammars presented and analyzed the paradigms of associated forms and the surface syntactic relationships between words in the construction of sentences. In this way the grammars were of great help to the reader, facilitating the analysis of the text and the identification of the elements of the Latin language, which conveys a great deal of morphological information through stems and inflections. This help was invaluable during the first years of this period, when manuscripts were still copied in continuous script, that is, without separation of words or indication of pauses within paragraphs. Christian teachers and writers applied this tradition of grammatical teaching to the interpretation of the Scriptures and, as a consequence, religious and literary education were intimately linked at all levels. This situation differed from that of pagan antiquity, where the highest cultural circles were reserved for a social elite. 
In this new situation, all literate Christians were urged to read, but "those who aspired to be called monks could not be allowed to remain in ignorance of letters." As Dhuoda would later point out in a treatise written for her son, it is by reading books that one learns to know God. The stimulus for reading thus became the salvation of the soul, and this powerful incentive was reflected in the texts that were read. The elementary reading book, the children's primer, became the Psalter. Grammatical studies and other texts were subordinated to this purpose and were used to perfect the knowledge of Latinity. Saint Isidore observed that "the teachings of the grammarians can even be profitable for our lives, provided they are put to good use". In antiquity the emphasis fell on the oral expression of the text - reading aloud while correctly articulating meaning and rhythm - which reflected the ideal of the orator predominant in ancient culture. Silent reading was intended for studying the text beforehand in order to understand it properly. The ancient art of reading aloud survived in the liturgy. The beginner, too, was to read aloud so that the teacher could correct him. After the elementary stage, fluency in reading and in the use of Latin could be stimulated and supervised through reading aloud in a group. The interest in these texts was not enthusiasm for drama as a literary form in itself, but a way of acquiring fluency in the language of spiritual life. Reading aloud, or at least sotto voce, was also practiced during the monastic lectio, so that the reader would exercise the auditory and muscular memory of words as a basis for meditatio. The term used in the various Rules for this type of reading was meditari litteras or meditari psalmos. From the sixth century, however, we observe that more importance began to be given to silent reading. 
Since this type of reading had to be supervised to ensure that the reader did not slacken or become distracted, it follows that silent reading was not uncommon in those circumstances. Although Saint Isidore had established the requirements for reading aloud in church, he also regarded preparation for the office of lector as an initial stage of ecclesiastical education. In this way one could read without physical effort, and, by reflecting on what had been read, it slipped from memory less easily. As long as the activity of producing texts through writing lasts, the activity of reading them will also endure, at least among some proportion of the world's population. Nor, on the other hand, does it seem that serious doubts can arise about the continuity, in a more or less near future, of the production of writing by the cultured classes of human society. We do not see how, or why, this activity, essential to the performance of important bureaucratic, informational and productive functions, could or should cease to exist. So this is not the question that should interest the storyteller-prophet or the analyst of mass socio-cultural behavior. And how will the various socio-cultural areas of the planet behave in this regard? Apart from these extreme cases, a problem of widespread illiteracy is present in almost all African countries, in a large part of Latin America and in many Asian countries. In addition, in many of the so-called developed countries there are high percentages of relapsed illiteracy, and of primary illiteracy among immigrants, especially in large urban areas. b) The causes of the persistence of illiteracy in large areas of the world depend not only on low economic levels, but also on political and ideological factors. This picture is bound to change in the future, but neither radically nor very quickly. 
The areas in which the circulation of written texts is smallest coincide with those that are not only economically weak, but in which demographic pressure is strongest and women are kept out of the educational process. And yet appearances are belied by recurrent symptoms of destabilization and by continual alarms of crisis concerning both publishing and reading. Indeed, in both sectors the contradictions seem obvious, the programmatic uncertainties are great, and the demands for state intervention are pressing. In this case, too, to understand it is necessary to analyze and to distinguish. Japan is a case apart. Once again, in the United States the struggle against mass urban illiteracy has been based on a program of reinforcing and socially disseminating the reading of books. This opinion is obviously paradoxical and cannot correspond to reality, although it is shared by other authoritative witnesses with whom I have had occasion to speak on this issue. In Europe the book is not yet fully treated as a commodity, and cultural operators and small publishers above all are opposed to its becoming one completely. In this sense, the controversy that arose in France over the liberalization of book prices was logical. The Japanese reader reads abundantly because he has a very high cultural level and because he considers it a duty to be informed and educated by written culture, in a country where the prestige of the school and the university is beyond discussion. The most successful sectors are manuals, entertainment and informational literature, and comics; prices, too, are very low. Overall, it is a phenomenon of mass reading with characteristics of induced consumption, probably unique because of the authoritarian and hierarchical nature of Japanese society, and therefore not easily exportable anywhere else. 
The European and Japanese situations are, from this point of view, similar to the American one, although they do not present the same characteristics. The new reading practices of the new readers must coexist with this authentic revolution in the cultural behavior of the masses, and they cannot help but be influenced by it. From a habit of this kind there arise, in the unprogrammed disorder of the video, new individual shows made up of inhomogeneous fragments that overlap one another. Zapping is an absolutely new individual instrument of audiovisual consumption and creation. In the millennial history of reading, rigid, professional and organized practices of book use have always been opposed by free, independent and unregulated ones. Out of more than eighty respondents, only a few want to read outdoors; twelve of them indicate that they prefer to read sitting at a table or a desk; and four also mention the library as a place for reading. The conventionalism and traditionalism of the reading habits of the interviewees in this survey stem from their high degree of culture as much as from social class, age, and the fact that they are cultured Europeans. In this sense, it is no accident that the only girl in the group under twenty years of age, who had only a primary education, showed preferences and habits clearly opposed to those of the others, and among her ways of reading also mentioned stretching out on the floor on a carpet. It has already been pointed out that young people under twenty potentially represent a public that rejects any kind of canon and prefers to choose anarchically. In fact, they also reject the rules of behavior that every canon entails. As has recently been written, "young people say they read everything, always and everywhere". 
And this is evident in libraries, which is all the more significant for the European observer, because it means that the traditional model no longer holds even in the place of its consecration, where it was once triumphant. For these readers rarely use tables to support the open book; rather, they tend to use them as support for the body, legs and arms, with an infinite repertoire of different interpretations of the physical situations of reading. Thus the new modus legendi also includes an intense and direct physical relationship with the book, much more so than in traditional ways of reading. The new way of reading influences the social role and the presentation of the book in contemporary society, helping to modify it with respect to the recent past, as is easy to see if we examine the modes of conservation. Nowadays the book coexists in a house with a number of different electronic objects of information and training, and with abundant technological or purely symbolic gadgets that decorate youthful environments and characterize their way of life. Among these objects the book is the least expensive, the most easily handled, and the one that can deteriorate most. The modes of its conservation are closely related to those of its use: if these are casual, original and free, the book will lack an established place and a secure location. As long as books are kept at all, they will lie among the other objects, on the most varied kinds of furniture, and follow the same fate, which is, to a large extent, inexorably ephemeral. 

It is precisely because of the latter that it is impossible to speak of Martha Rivera's poetry without taking into account the discursive bases that support not only her "position" but also, and especially, her ideological positioning in an era and a society such as the Dominican Republic of the eighties. One need not be a "Foucauldian" to understand that the emergence of a new "discursive formation" always corresponds to a crisis of meaning on the plane of the real-political. In the Dominican case, in the 1980s there was a triple rupture that catalyzed the emergence of a feminine-feminist discursive formation in various socio-cultural fields. Whether or not her work subscribes to the transcendental aesthetics of Paz, what the attentive reader grasps first of all is Martha's attitude of permanent receptivity to the different manifestations of poetry in everyday life. I do not mean in any way that Martha's poetry is "surrealist", because neither her "style" nor the different mechanisms of image generation she uses are the result of a formal adaptation of the clichés proposed by surrealism. A mechanism of "irrationalization" of the real is thus manifested in her poems, unfailingly determining her most characteristic positioning, which proclaims the hegemony of personal perception over the data of everyday life. From this point of view, it is hardly surprising that the speech acts that predominate in most of the poems of Transparency in my mirror are declarative. Not even in texts like the one entitled "You", where the declarative intentionality fuses with another of a narrative type, is the narcissistic bias that seems to found the poetic plane of Martha's poetry abandoned. Waged in its contingent form, the poetic battle between the Self and the world cannot be won without allies. 
It is well known that the absence of criticism is the best fertilizer for the terrain where the kind of literary writing that characterizes the contemporary period proliferates. "To change that classification, to displace the word, is to make a revolution." History is, in effect, the plane where discourses oppose one another, and it is only from this opposition that the meaning of a writing practice can be established. I do not know whether, to date, anyone has been interested in investigating what the poets of the eighties contributed to the theory of love. Such research, however, would present more than one aspect of interest, since the emergence of that group of urban poets later known as the "generation of the eighties" coincided with the series of cultural and socioeconomic processes mentioned above. It is mainly owing to the incipient nature of this writing practice that its historicity is all the more crucial, since, before it, Dominican feminine poetry was characterized by a more or less mimetic functioning with respect to the masculine writing of love. Also in this case, that "here" can be nothing other than the poem. The ideologeme of "transparent" writing is of symbolist stock. It consists in conceiving the Book as an extension of the Life of the author. In this "eidetic" conception of writing, "Life" would be the equivalent of the "signified" that has the "Book" as its "signifier." In this sense, the poetic self-declaration of the aforementioned poem entitled "Woman No." belongs doubly to the plane of the phantasm. In either case, death is called upon to function as a "guarantor" of a self-perception of the poetic self, who is thereby confessed "phantasmatized." In the psychoanalytic perspective, the notion of the phantasm has found multiple developments since its original elaboration by Freud. 
What is at stake in this logic of transparency is nothing other than the theoretical status of the I conceived in continuity or contiguity with the You. Since it is the "discursive body" of the phantasm that operates the substitution of the real I by the poetic I in the poem, it cannot be named without implicitly affecting the theoretical virtuality of its corresponding you-recipient. In other words: where the poem says "my body," the "transparent" body thus summoned designates a phantasmal instance where the value of the possessive "my" loses its effect, becoming "yours" without ever becoming "ours." In that way, the named body becomes equivalent to the desired body, and in the narcissistic circularity thus proposed lies the secret value of the desired "transparency" of a writing with a mirror vocation. An imaginary, labyrinthine topology is what is outlined behind this formulation of two bodies superimposed on the plane of the phantasm. Call it "life," if it must be named, even though, or precisely because, this designation is as insufficient and precarious as any other. Other notions, such as "spiritual filter," "existential precipitate," or "desiring mechanism," could designate just a few of the meanings condensed in the term "life" proposed above. His is a poetics of the Self inscribed in what the Germans call erlebte Rede, or lived discourse. Not exactly "autobiographical," but closer to the program than to the purely existential. This is because, in poetry, as in life, what is sought is always more important than what is found. Paris: Éditions du Seuil, 1966. The citation corresponds to the Spanish translation by José Bianco; the free rendering of this quote given above is my responsibility. Proof that this rumor is present among the possible lines that make up the plane of its thetic intentionality is provided by the very incipit of the third poem of the book, whose title, "Torrente," is in itself a true program. 
Let us read: "The river is not repeated, the fire is not repeated." After the pre-Socratic philosophers, perhaps the Renaissance poets were the first to have shown the "liquid" character of the voice. However, the disconnection of meaning that affects many of the self-styled "poets" of the contemporary period prevents them from capturing that flow and denies them any possibility of transmitting it to their creatures. In fact, it is precisely the term voice that is commented upon by the elliptical incision subordinated to it, as if it were equivalent to the flow of streams and early mornings. The continuity of this association between the voice and the flow is also one of the crucial supports of the work of René Rodríguez Soriano in Rumor de pez. But perhaps it should be remembered here that dreamed matter does not necessarily retain the properties of its correlate in the physical world. In its constant flow, the poetic voice settles rhythmically in the language, but at practically the same time it ceases to be "an" individual voice and becomes an impersonal saying rather than the "wanting to say" of someone in particular. Thus, the poem is not what someone "wants to say": it is what the poem says. However, this intransitive saying is completely transformed when it becomes a reading, and thus the same poem can be read a thousand times, each time yielding a different reading, each of them valid. It is not necessary, then, to read the poems of Rumor de pez "with a magnifying glass," stopping to observe the interesting fluctuation of the vocalic-consonantal play established in its verses, in order to be caught by their spell. To do so, on the other hand, is to take part in that feast of language that poetic writing constitutes, and to see how the same sounds are able to enunciate totally different senses. 
This intermittence is neither before nor after the construction of meaning in reading, but radically simultaneous with it. In fact, on the semantic plane we find a similar recurrence among the different sequences that make up the poem. This process could also be seen as the result of a loss: the poetic self that assumes itself "drained" in the first verse would in effect be confirming a subtle change of condition from the most watery to the most aerial. If so, the "fish that rumored" in this book is perhaps that inhabitant of pelagic oblivion, an almost non-being who writes from his own condition as a forgotten creature that nonetheless remembers. Enriquillo Sánchez: "From your silence to my silence there is an abyss. Anguish is a bridge with broken beams. Thirst, a blind pitcher and nonchalance downstream. A bird flies aimlessly through the deep night. Mute and deaf, a fish is lost in the corner of your lips. From my lips to your lips there is a story. A story that ends in the very word of the beginning. From your silence to my silence there is a clock. A needle that digs into the silence on purpose. A dagger wounded by the absence of your light." Let us call this forgetting by any name, for it is clear that no name could translate its generic substantivity, since the state we allude to assumes a different form in each subject and each time. It is fundamentally because of this that any postulation of a supposed "poetics of thinking" is hollow and vain. Even so, for something to be "forgotten" it is essential that it first existed in someone's consciousness. No procedure, no substance, no contraption can ever produce a poem worthy of the name in the consciousness of those who have not first been concerned to clarify time in the millions of alveoli of meaning. Only oblivion can erase what we think we know about the world and give it back to us, placing in our hands that virginal sameness which is like that of the sounds in a true poem. 
Louis Aragon said it well: "my true biography is in my poems." And so it is, for it is there in The Feast. A bold and constant effort to be marked, in the most Lacanian sense of the word, crosses the writing of Gómez Rosa from his first poems to his last. And without a doubt, it is this personal "brand image" that gives his work an unusual singularity in the midst of the contemporary Latin American poetic maelstrom. It remains true, however, that any attempt to penetrate the secret imaginary laboratory of a poet requires some collaboration on the part of those who propose such a feat. In effect, the value of any reading depends to a great extent on the questions that determine its heuristic functioning. Worse yet, reading conceived as "posteriority" presupposes a supposed "anteriority" of textual meaning, the true matrix of all sorts of intransigence, fanaticism, and extremism, with whose infamies the history of humanity is filled. An unenviable honor in these times, certainly, but an honor after all. Dying in each book he has sent to press, the need to be reborn is undoubtedly the key to his persistence in the poet's office. But it is not death we should attend to if we really want to know who this Alexis is who writes poetry. And this is how he went on shaping his poems until completing the fourteen titles included in this compilation of his opera omnia. In fact, one will search in vain in the poems of Gómez Rosa for arguments validating any hypothesis related to a paradigm other than the war of April 1965. From his first poems, his writing tends and aspires spontaneously to a kind of verbal horizontality that confines any type of verticality. This horizontality to which I refer is based almost entirely on the dialogical functioning of a large number of Gómez Rosa's poems. 
The communicative positioning implicit in the "tuteo" (the familiar second person) and its deictic equivalents is not a factor to be disdained when considering the operation of a poetic text in the set of aesthetic planes that cross it. The need to understand the horizontality of Gómez Rosa's poetics is crucial. "Long and sorrowful has the road to your village become: long and difficult. In the nights of clerén and deserted pailas, Quasimodo, in the mouths of the children, round streets that paid to your memory small lights of guardian pencils." The position of Alexis in this poem seems partially to "clone" that of the great Dominican poet. The commitment of this post-mortem solidarity with Lamouth is crucial for the poetic self, who thus marks his desire to be placed in the scene where the myth of the urban poet is indefinitely constructed. The poet was aware of this circumstance from the beginning, and experienced from an early stage the intuition that it was necessary to circumvent what could clearly limit the scope of his poetics. He thus imagined the procedure that the present edition of The Feast has tried to systematize as far as possible. The virtuosity of Alkan is expressed in that piece in the way the subject is transformed, through real technical feats, into something totally different in each case. The difficulty of radiographing "his" style does not constitute proof of stylistic vagueness. The supposed uniqueness that sustained the old notion of the work was the result of a monistic conception of the human person as an individual. Writing with the work as reference can only lead to assuming the stylistic precept that Buffon summed up in his famous phrase "style is the man." Writing with the notion of text in mind, on the other hand, implies starting from a conception of the human person as the subject of a permanent and constant construction. 
This multiplies to infinity the possibility of reformulating one's own and others' being in each act of reception-production. The paradigmatic value of style gives way and collapses before the syntagmatic weight of the constant elaboration of the verbal chain. What works is the style. Denying the concept of the "work" as a significant macrostructure does not even remotely deny style. On the contrary, what this position postulates is a conception of style as a principle of autogenesis, despite the conventional tendency to conceive it under the traits of a certain phylogenesis. In his poems, Alexis rewrites Dominican and Spanish-American orality with a fluency that demonstrates a significant predisposition to the different modulations of the verbal phenomenon. Alexis himself associates this configuration of the expressive level of his texts with the neo-baroque writing trend. The meaning of this policy was none other than to make visible the mysteries of faith, promoting a grandiloquent and highly elaborated representation of passages of sacred history of great impact on the imaginary of the time. The personal history inoculated into Alexis's poems would thus constitute, according to my hypothesis, the enunciative equivalent of that other "sacred" story whose representation was generalized within the Baroque framework. In this way, the systematic cult of the personal image would come to be one of the best-defined features of Alexis's "style." On the plane of the dispositio, the baroque rhetoric of Alexis would be characterized by placing syntax in a position of honor as the generating apparatus of the poem's representative and evocative force. The subordination of these elements to a syntactic reordering through multiple hyperbatic, anacoluthic, and anaphoric procedures is what usually determines the compositional work of his poems. "From the same pig, devilish ribs, worthy of the master Rigas Kapattos. 
Rice with parsley and baked potatoes to moisten the words: Cabernet Sauvignon and the economical tap. Nostalgia of the night, the spoon of the poet Mateo Morrison." As can be seen in this poem, the poet uses a dinner among friends as a "pretext" for assembling a textual model where the anecdotal side of the matter is presented to us practically disarmed by way of a broken syntax. Although the elliptical syntax of the poem has the dinner as its axis, its accumulative movement leads precariously toward a semantic closure of a predicative type, owing to the predominance of noun phrases that swell the semic charge of the poem. We find this same procedure in many of the texts that make up Alexis's published work. Conclusion: The Feast, complete works. "A poet of extraordinary strength and dazzling poems of uneven craftsmanship." Where once there was only sand and sea, as in Eliot, they have put lupanares of cement and cocaine; in short, they have turned the world into a latrine made in the image and likeness of their own houses. It is also in the middle of that apocalyptic scenario that the redeemer rises, three times Saint Gillette, the definitive one. His sharp kisses point toward a vital horizon capable of extracting metaphysics from the most immediate political offal. A tiredness like dry skin is what falls from our hands on opening certain books that still insist on being poetic. Second razor-slash: theory and practice of the "sow-macho." In contemporary Dominican poetry, the production of teratological signs has almost no cloth to cut from. However, there is no doubt that it is only from the awful aesthetics of the eighties onward that such signs become possible, and even necessary, as textual exponents of a sociocultural universe experienced by its subjects from a critical perspective. However, despite this precocity, we must agree that this first invention was followed by a huge vacuum. This is one of the reasons why the sow-macho of Pastor de Moya deserves special attention. 
It could be argued that the location of this index at the end of the volume indicates that the author does not attach particular importance to the functioning of his text from the point of view of reading. The literary text is therefore always "collage" because it is always cut. And the seemingly eccentric gesture of the "intertextual collage" does nothing more than highlight this fatality of the fragmentation of representation and this power of linkage of poetic writing. In the following section we will observe some of the implications of the enunciative apparatus of La piara. Third razor-slash: physics and metaphysics of the flesh of the "dead one": another malaise in a different culture. It is well known that no reading can exhaustively consume the textual body. But first of all, it is advisable to specify the meaning we will give to the terms of this opposition. And it is precisely here that the theorization of Patrick Charaudeau intervenes. Indeed, Charaudeau adopts the semiolinguistic perspective to construct a wide-ranging theory of verbal communication, based on a prior reflection on the legal and institutional aspects of discourse. From this double point of view, the reading of a text becomes an exchange of intentions. For the moment, let us observe that in the text of La piara the enunciative marks of the third person singular predominate over all others. The descriptive functioning of a large part of the thetic chains where these figures appear could be considered an indication of the constitutive intentionality of the writing project of the enunciating I. Now, the problem is that this descriptive operation is not exempt from deviations and even perversions. Note in passing that the bold letters affecting the designator "dead one" in this fragment function as marks of enunciation. But in a general sense, everything in this fragment points toward the staging of the miracle of the transubstantiation of flesh into cultural value by the enunciating I. 
Indeed, as will be seen just below, the work of mimesis proposed in this sequence hides a change in the symbolic value of the main referent. Throughout the sequence, action verbs are rarely used, and when they are, their conjugation in the present tense reveals the descriptive intentionality that determines their functioning. Most remarkable, on the other hand, is the particular work of the designating apparatus, to which the enunciating I assigns the mission of leading the change of status of the pig until it is elevated to the category of person. The latter is flagrant in the second fragment of the sequence: "Said edible character, gilded by the fire of the firewood and the vigil, rests in a coffin. His only accessories are: a tie or bowtie, an apple in his mouth, cottons covering his nostrils, and some lenses to protect him from the sun or life's brightness." Returning to Artaud's terminology, the passage from a physics to a metaphysics of the text will be noted in the correlation that reading establishes between the designators "dead one" and "edible character." According to this metaphysics, the body of the roast pork will gradually change its nature in the same measure as this work of personification I refer to installs itself as a new textual value in reading. The play of implicatures provides here one of the keys to the semiotic work of Pastor de Moya: the "edible character" not only "rests in a coffin" but is represented with all the pomp required by his transitory status as a symbolic dead man. Very quickly, the text seems to turn toward a teratological delirium, in full allegorical anarchy. The "characters" thus named and described seem cut out of one of those Saturday frivolity magazines graciously supplied by our main newspapers: "He arrives escorted by his mother, who is a formal and half-dirty lady. 
She is dressed immaculately in black from the waist up, in the style of the old rezadoras, with a mantilla and everything. She also wears a pince-nez. But from the waist down she wears high heels and black pantyhose, like one of the Playboy bunnies." In this fable of the burial of the pig there are no mourners, but rejoicing. Everything is double here, because the same enunciator uses a double language, which guides the reading to detect the double meaning. This is also the case of the ironic figuration of the "double morality" of the sow mother, evidenced by her ambiguous "dress," half dressed and half naked. "They begin to descend as if attending the funeral procession of a famous public official or minor politician. The procession walks with short steps. Only that polarity breaks the swing of the pennants made of multicolored papers, which give the feeling of a burial or baquiní of a very poor and much-loved child." Even so, the ambiguous ironic work that predominates in the fragment remains intact. "Behind, two or three trumpeters play the notes of a sacred and lugubrious march: they look like retired musicians who once belonged to a municipal band from a forgotten town." "There are those who would bet their left hand to assert it. Others think I was born that way, that I am a drunken sperm. I realize, when I pass through a door or any corner, that people look at me sadly. It's true, I've gone through life tumbling. The moment will come when I will explain everything, even if only to you." At this point, it is necessary to ask what the status is of this curse that the text of La piara summons as one of its possible reading horizons, in the understanding that it is in no way inserted into any project of contemporary "avant-garde" poetics. No one should be surprised at this apparent "anachronism." 
In fact, "cursed" and "avant-garde" projects have in common that they can only be fundamentally anachronistic, by virtue of being at once utopian and untimely. In effect, the first gesture of a being endowed with enough theatricality to take poetry seriously is to deny their time and their society, or the sector they identify as contrary to their interests. Not only because it is one of those rare pearls capable of breaking any necklace that tries to string them together, but because one will search in vain in the history of Dominican letters for a formal matrix with which symmetry relations with La piara could be established. What in a given time belongs to History should bother no one. Thus, the cultural distance separating the cultural environment of the eighties from that of the decade of 2010 is an unavoidable fact. The curse of Pastor is therefore the symptom of another malaise in a different culture, a malaise made of a sum of multiple indifferences. Even his apparent aggressiveness is nothing other than awareness of that capital indifference, which confines any work to remain indefinitely on the plane of inconsequence. In the midst of this bleak picture of indifference, a text like La piara confronts a culture that never before deserved the label of cannibal as much as in our time. "In addition, they offer and sell the meat of this one to the curious, and thus begins the great tasting, in the knowledge that each one must eat only meat and skin, and then place the remains and the bones in a corner far removed from the deceased." An erotics of food hides behind the delight with which Pastor gives himself over to describing the "tasting." It is hardly surprising, then, that the poetic self has a fatalistic sense of history. 
All closed and unidirectional worlds have in common their illusory character, for if anything characterizes what can be considered real, it is its radically contradictory nature. The political validity of illusion is the same as that of the lie: it needs constant renewal, or it inevitably ends by annulling itself. At least in the field of literary writing (not so in jurisprudence), the notion of "plagiarism" has long since lost much of its old anathema value, becoming instead a technique or mechanism for producing new senses. Obviously, for every Lautréamont who manages to build a truly important text from the use of these techniques, several tens of thousands of writers will never rise above the status of vulgar plagiarists. Note, however, that practically the same could be said of any of the literary composition techniques currently available. "These records confirm the occurrence of rare behaviors in the daily life of birds and reptiles after they had suffered blows to the front of the brain. These observations served as the basis for the first trials, both in humans and in other animals." "Last night, while we were sleeping, a young woman came to me who walked naked over my whole body, dragging two fine snakes that hung like earrings from the tips of her breasts" (pp. 46-47). The general tone of the "ice-pick" sequence is deliberately encyclopedic, and its scheme is the conventional tripartite one: introduction, development, conclusion. As can be observed in the previous example, the procedure used by the citing I consists of assuming a strategically neutral "style," with sentences presenting normal syntactic structure. Another strategy employed by the citing I is what we might call schematization. 
Before illustrating this last strategy, I would like to clarify that I do not claim to have detected the "source text" from which Pastor borrowed the ideas he later expressed in this area of his book. "The original leucotomy was a crude operation, and the practice soon developed into a more accurate, more precise procedure in which only 'very small' lesions were produced in the brain." Despite this, no one can claim "plagiarism" in this procedure, because the second text is a reworking of the first. The text of the "Entremés" presents the "monster" in question as a creature with "the round face of a female, the neck of a horse, the body of a feather, the tail of a peje." The misogyny of one of the characters, symptomatically called Oedipus, leads him to affirm that the properties of these animals are enclosed in the woman. In what follows, only the page number will be indicated. Artaud himself was aware of the risk of falling into anarchy once the logical chains binding words to things are broken. According to Artaud: "it is understood that poetry is anarchic insofar as it calls into question all the relations of object to object and of forms with their meanings. It is also anarchic insofar as its appearance is the consequence of a disorder that brings us closer to chaos." The citation of this definition on the home page of the blog accessed through this address is nothing more than a symptom. At least in the contemporary period, every poem is always written from two different but not mutually exclusive margins. The second margin is of an epistemological type and is a direct consequence of the first: no one can write poetry in our time without knowing in advance against which of the innumerable traditions of writing his text will rise. And this prior knowledge must necessarily be the better defined, the greater the textual efficacy to which the poem aspires. 
the foregoing, which for obvious reasons would be better placed in a footnote, seeks only to situate the double perspective from which I approach here the poetry of alexis gómez-rosa. what follows is, among other things, an attempt to justify this hypothesis. left margin: the problem of time. since the late 1960s, alexis gómez-rosa has stood out as one of the most talented poets of his class. not without controversy, his passage through the group "the torch", one of the multiple formations that emerged in the Dominican capital after the end of the 1965 war, was crucial in the construction of his poetic personality. as will be seen, it is in fact this kind of mythical time from which the poetic voice of alexis gómez seems to emanate in many of the marginal poems of a language that pursues its form. what does concern us is trying to situate the type of pact that the poet gómez-rosa established with his contemporaneity. on this point, as on many others, it is his writing that we must interrogate. hence the radically marginal character of that topos in which the poet spontaneously entrenches himself in order to accept the ubiquitous homelessness that is orality. poems that are a spell and at the same time an obsession: if gómez-rosa "listens", it is not in order to become that somewhat pretentious identity now called "witness", but out of pure fascination and even true identification. instead of trying to give a positive answer to that question, I think it preferable to consider it one of the unknowns that every reader of the poems of alexis gómez-rosa should try to resolve in their own way. for my part, by way of interpretation, I limit myself to considering the nostalgic psychological field that the poet establishes in most of the texts making up this book as revealing of its openly subjective status. 
right margin: the problem of tradition. what tradition of writing do the texts of alexis gómez-rosa in marginal poems of a language that pursues its form oppose? let us, then, set aside antiques like the old opposition between a "recitative" poetry and a "colloquial" poetry, and reflect instead on the immediate poetic referents of twentieth-century Dominican poetry. by the latter, however, we do not understand the series of labels that an increasingly premeditated academic facility imposes on us, but the main agents, men and women, whose writing practice controversially marked the panorama of Dominican poetry in each of those periods. this is not the place to discuss the limits and scope of this reflection of céspedes. a detail common to the three is that in their works they approach the space of the poem from a playful perspective in which humor and parody systematically disarticulate the conventional strategies that usually inoculate the lines of the poem with discursive-reflexive writing. inclusive: his writing practice summons a type of reader capable of recognizing and validating the set of referential marks of what we might call the poet's generational memory. exclusive: this same writing strategy indirectly shuts out the sector of the public unable, for reasons of age, to assume these mnemonic traces as their own.
