Dr. George Pieczenik (born 1944) ( ... )
Dr. Francis Sellers Collins (born 1950) ( [HK007C][GDrive] )
Charles Peter DeLisi (born 1941) ( ... )
Robert Louis Sinsheimer (born 1920) ( ... )
Dr. Renato Dulbecco (born 1914) ( ... )
Mentions :
The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint.[1] It remains the world's largest collaborative biological project.[2] Planning began after the idea was picked up in 1984 by the US government; the project formally launched in 1990 and was declared complete on April 14, 2003.[3]
Funding came from the American government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China.[4]
The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The "genome" of any given individual is unique; mapping the "human genome" involved sequencing a small number of individuals and then assembling the results to yield a complete sequence for each chromosome. The finished human genome is therefore a mosaic that does not represent any one individual.
History
The Human Genome Project was a 13-year-long, publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 15 years.[5]
In May 1985, [Robert Louis Sinsheimer (born 1920)] organized a workshop at the University of California, Santa Cruz, to discuss sequencing the human genome,[6] but for a number of reasons the NIH was uninterested in pursuing the proposal. The following March, the Santa Fe Workshop was organized by [Charles Peter DeLisi (born 1941)] and David Smith of the Department of Energy's Office of Health and Environmental Research (OHER).[7] At the same time [Dr. Renato Dulbecco (born 1914)] proposed whole genome sequencing in an essay in Science.[8] James Watson followed two months later with a workshop held at the Cold Spring Harbor Laboratory. Thus the idea for obtaining a reference sequence had three independent origins: [Robert Louis Sinsheimer (born 1920)], [Dr. Renato Dulbecco (born 1914)] and [Charles Peter DeLisi (born 1941)]. Ultimately it was the actions by DeLisi that launched the project.[9][10][11][12]
The fact that the Santa Fe workshop was motivated and supported by a Federal Agency opened a path, albeit a difficult and tortuous one,[13] for converting the idea into public policy in the United States. In a memo to the Assistant Secretary for Energy Research (Alvin Trivelpiece), [Charles Peter DeLisi (born 1941)], who was then Director of the OHER, outlined a broad plan for the project.[14] This started a long and complex chain of events which led to approved reprogramming of funds that enabled the OHER to launch the Project in 1986, and to recommend the first line item for the HGP, which was in President Reagan's 1988 budget submission,[13] and ultimately approved by the Congress. Of particular importance in Congressional approval was the advocacy of New Mexico Senator Pete Domenici, whom DeLisi had befriended.[15] Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.
Alvin Trivelpiece sought and obtained the approval of DeLisi's proposal by Deputy Secretary William Flynn Martin. This chart[16] was used in the spring of 1986 by Trivelpiece, then Director of the Office of Energy Research in the Department of Energy, to brief Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of Secretary Herrington. This reprogramming was followed by a line item budget of $16 million in the Reagan Administration's 1987 budget submission to Congress.[17] It subsequently passed both Houses. The Project was planned for 15 years.[18]
Candidate technologies were already being considered for the proposed undertaking at least as early as 1979; Ronald W. Davis and colleagues of Stanford University submitted a proposal to NIH that year and it was turned down as being too ambitious.[19][20]
In 1990, the two major funding agencies, DOE and NIH, developed a memorandum of understanding in order to coordinate plans and set the clock for the initiation of the Project to 1990.[21] At that time, David Galas was Director of the renamed "Office of Biological and Environmental Research" in the U.S. Department of Energy's Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and [Dr. Francis Sellers Collins (born 1950)] succeeded James Watson, assuming the role of overall Project Head as Director of the U.S. National Institutes of Health (NIH) National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade.
The $3 billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years.[22] In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China and myriad other spontaneous relationships.[23] The project ended up costing less than expected, at about $2.7 billion (FY 1991).[4] When adjusted for inflation, this is roughly $5 billion (FY 2018).[24][25]
Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by U.S. President Bill Clinton and British Prime Minister Tony Blair on June 26, 2000).[26] This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then-graduate student Jim Kent. Ongoing sequencing led to the announcement of the essentially complete genome on April 14, 2003, two years earlier than planned.[27][28] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the very last chromosome was published in Nature.[29]
The institutions, companies, and laboratories participating in the Human Genome Project are listed below, according to NIH:[4]
No. / Name / Affiliation
1 / The Whitehead Institute/MIT Center for Genome Research / Massachusetts Institute of Technology
2 / The Wellcome Trust Sanger Institute / Wellcome Trust
3 / Washington University School of Medicine Genome Sequencing Center / Washington University in St. Louis
4 / United States DOE Joint Genome Institute / United States Department of Energy
5 / Baylor College of Medicine Human Genome Sequencing Center / Baylor College of Medicine
6 / RIKEN Genomic Sciences Center / Riken
7 / Genoscope and CNRS UMR-8030 / French Alternative Energies and Atomic Energy Commission
8 / GTC Sequencing Center / Genome Therapeutics Corporation, whose sequencing division was acquired by ABI
9 / Department of Genome Analysis / Fritz Lipmann Institute, name changed from Institute of Molecular Biotechnology
10 / Beijing Genomics Institute/Human Genome Center / Chinese Academy of Sciences
11 / Multimegabase Sequencing Center / Institute for Systems Biology
12 / Stanford Genome Technology Center / Stanford University
13 / Stanford Human Genome Center and Department of Genetics / Stanford University School of Medicine
14 / University of Washington Genome Center / University of Washington
15 / Department of Molecular Biology / Keio University School of Medicine
16 / University of Texas Southwestern Medical Center at Dallas / University of Texas
17 / University of Oklahoma's Advanced Center for Genome Technology / Dept. of Chemistry and Biochemistry, University of Oklahoma
18 / Max Planck Institute for Molecular Genetics / Max Planck Society
19 / Lita Annenberg Hazen Genome Center / Cold Spring Harbor Laboratory
20 / GBF/German Research Centre for Biotechnology / Reorganized and renamed to Helmholtz Center for Infection Research
Additionally, beginning in 2000 and continuing for three years, the Russian Foundation for Basic Research (RFFI) provided a grant of about 500 thousand rubles to fund genome mapping of Russians (three groups: Vologda-Vyatka, Ilmen-Belozersk, and Valdai) by the Laboratory of Human Population Genetics of the Medical Genetics Center of the Russian Academy of Medical Sciences. Although the top Russian geneticist in 2004 was Sergei Inge-Vechtomov, the research was headed by Doctor of Biological Sciences Elena Balanovskaya at the Laboratory of Human Population Genetics in Moscow. Since 2004, Evgeny Ginter has been the scientific supervisor of the Medical Genetics Center in Moscow.[30]
State of completion
The project was not able to sequence all the DNA found in human cells. It sequenced only euchromatic regions of the genome, which make up 92.1% of the human genome. The other regions, called heterochromatic, are found in centromeres and telomeres, and were not sequenced under the project.[31]
The Human Genome Project (HGP) was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, a working draft was completed and published by February 2001, and the final sequence mapping of the human genome followed on April 14, 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on May 27, 2004, indicated that over 92% of samples exceeded 99.99% accuracy, within the intended goal.[32]
In March 2009, the Genome Reference Consortium (GRC) released a more accurate version of the human genome, but that still left more than 300 gaps,[33] while 160 such gaps remained in 2015.[34]
In May 2020 the GRC reported 79 "unresolved" gaps,[35] accounting for as much as 5% of the human genome.[36] Months later, however, the application of new long-range sequencing techniques and a homozygous cell line in which both copies of each chromosome are identical led to the first telomere-to-telomere, truly complete sequence of a human chromosome, the X chromosome.[37] Work to complete the remaining chromosomes using the same approach is ongoing.[36]
In 2021 it was reported that the Telomere-to-Telomere (T2T) consortium had filled in all of the remaining gaps, producing the first complete human genome sequence with no gaps.[38]
The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can support work in many areas: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medications and more accurate prediction of their effects; advances in forensic science; biofuels and other energy applications; agriculture, animal husbandry, and bioprocessing; risk assessment; and bioarcheology, anthropology, and the study of evolution. Another proposed benefit is the commercial development of genomics research related to DNA-based products, a multibillion-dollar industry.
The sequence of the DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz,[39] and Ensembl[40] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's Law, a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate.[41] This means that the speeds at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the above-mentioned Human Genome Project.
The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on the inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce 2 or more gene products from the same locus.[42]
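As a toy illustration of the alternative splicing described above, the transcripts from a single hypothetical locus can be enumerated. The exon names and the choice of which exons are constitutive versus skippable are invented for illustration, not drawn from the cited experiments:

```python
# Toy illustration of alternative splicing: one locus, several mRNA isoforms.
# Exon names and the cassette/constitutive assignments are invented.
from itertools import combinations

# Suppose E1 and E4 are constitutive (always included) while E2 and E3 are
# cassette exons that may be skipped independently. Enumerate the isoforms:
cassettes = ["E2", "E3"]
isoforms = []
for r in range(len(cassettes) + 1):
    for kept in combinations(cassettes, r):
        isoforms.append(["E1"] + [e for e in cassettes if e in kept] + ["E4"])

for iso in isoforms:
    print("-".join(iso))
# Four distinct mRNAs from a single locus:
# E1-E4, E1-E2-E4, E1-E3-E4, E1-E2-E3-E4
```

With just two independently skippable exons a single locus already yields four products, consistent with the observation that most genes have several splice variants.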
The genome published by the HGP does not represent the sequence of every individual's genome. It is the combined mosaic of a small number of anonymous donors, all of European origin. The HGP genome is a scaffold for future work in identifying differences among individuals. Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of today there is still only one "reference genome."[43]
Findings
Key findings of the draft (2001) and complete (2004) genome sequences include:
There are approximately 22,300[44] protein-coding genes in human beings, in the same range as in other mammals.
The human genome has significantly more segmental duplications (nearly identical, repeated sections of DNA) than had been previously suspected.[45][46][47]
At the time when the draft sequence was published, fewer than 7% of protein families appeared to be vertebrate specific.[48]
Accomplishments
The first printout of the human genome to be presented as a series of books, displayed at the Wellcome Collection, London
The human genome has approximately 3.1 billion base pairs.[49] The Human Genome Project was started in 1990 with the goal of sequencing and identifying all base pairs in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a megaproject.
The genome was broken into smaller pieces, approximately 150,000 base pairs in length.[50] These pieces were then ligated into a type of vector known as a "bacterial artificial chromosome", or BAC, derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The assembled 150,000-base-pair pieces were then put together to create chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.[51][52]
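The overlap-based reassembly step at the heart of shotgun sequencing can be sketched with a toy greedy assembler. This is a deliberate simplification under invented data: the sequences below are made up, and real assemblers (which must cope with errors, repeats, and millions of reads) are far more sophisticated than this greedy merge:

```python
# Toy sketch of shotgun assembly: reads are merged greedily on their longest
# suffix-prefix overlap. Illustrative only; not the HGP's actual software.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            break  # no overlaps left; contigs stay separate
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# A 150 kb BAC insert would be shredded into many short reads; here a tiny
# made-up fragment is reconstructed from three overlapping reads.
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]))
# → ['ATTAGACCTGCCGGAATAC']
```

In the hierarchical approach this assembly happens twice: once within each BAC, and again when the mapped BACs are ordered along the chromosome.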
Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Wellcome Sanger Institute (then called The Sanger Centre) based at the Wellcome Genome Campus, Washington University in St. Louis, and Baylor College of Medicine.[22][53]
The United Nations Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.[54]
In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter, and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300m Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. The Celera approach was able to proceed at a much more rapid rate, and at a lower cost, than the public project in part because it used data made available by the publicly funded project.[45]
Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing,[55] which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.[56]
In March 2000, President Clinton, along with Prime Minister Tony Blair in a dual statement, urged that all researchers wishing to study the genome sequence should have "unencumbered access" to it.[57] The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.
Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper)[45] described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts announced in 2003 and 2005 filled in approximately 92% of the sequence.
In the IHGSC international public-sector HGP, researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones taken from many different libraries were used in the overall project, with most of those libraries created by Pieter J. de Jong's lab. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11; the "RP" refers to Roswell Park Comprehensive Cancer Center).[58][59]
HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each), with each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.
Although the main sequencing phase of the HGP has been completed, studies of DNA variation continued in the International HapMap Project, whose goal was to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or "haps"). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.
In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.[60][61]
In 2007, a team led by Jonathan Rothberg published James Watson's entire genome, unveiling the six-billion-nucleotide genome of a single individual for the first time.[62]
With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.[21][50]
It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.[63][64]
There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, and diseases associated with this gene or other datatypes. Further, a deeper understanding of the disease processes at the level of molecular biology may determine new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without them.[65]
The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.[63][66]
The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has affected the evolution of the plant.[67] It is being investigated which loci are most susceptible to manipulation, and how this plays out in evolutionary terms. Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for future advances in genetic modification which could yield healthier, disease-resistant wheat crops, among other things.
At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised in regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns of most individuals was the fear that both employers and health insurance companies would refuse to hire individuals or refuse to provide insurance to people because of a health concern indicated by someone's genes.[68] In 1996 the United States passed the Health Insurance Portability and Accountability Act (HIPAA) which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.[69] Other nations passed no such protections[citation needed].
Along with identifying all of the approximately 20,000â25,000 genes in the human genome (estimated at between 80,000 and 140,000 at the start of the project), the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project.[70] For that, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project.[22][71] This budget started at approximately $1.57 million in the year 1990, but increased to approximately $18 million in the year 2014.[72]
Whilst the project may offer significant benefits to medicine and scientific research, some authors have emphasized the need to address the potential social consequences of mapping the human genome. "Molecularising disease and their possible cure will have a profound impact on what patients expect from medical help and the new generation of doctors' perception of illness."[73]
Mar 7 - Interesting stuff from Brenner .. https://www.nytimes.com/2000/03/07/science/scientist-work-sydney-brenner-founder-modern-biology-shapes-genome-era-too.html?searchResultPosition=29
2000 (June 22) - https://www.nytimes.com/2000/06/22/us/rivals-in-the-race-to-decode-human-dna-agree-to-cooperate.html?searchResultPosition=14
https://timesmachine.nytimes.com/timesmachine/2000/06/27/issue.html
and page 21 - https://timesmachine.nytimes.com/timesmachine/2000/06/27/issue.html
archive/online : https://archive.nytimes.com/www.nytimes.com/library/national/science/062700sci-genome.html?source=post_page---------------------------
Seema Kumar, Whitehead Institute / Saved as PDF : [HE00AM][GDrive]
Mentioned : Eric Steven Lander (born 1957) / Kevin Judd McKernan (born 1973) / Human Genome Project / Celera Genomics Corporation / Dr. John Craig Venter (born 1946) /
The Whitehead/MIT Center for Genome Research enjoyed much more than 15 minutes of fame in late June, as the [Human Genome Project] and [Celera Genomics Corporation] announced their first assemblies of the human genome, the genetic blueprint for a human being.
Whitehead was the single largest contributor to the [Human Genome Project], providing roughly a third of all the sequence assembled by the international consortium of 16 laboratories involved.
Whitehead also laid much of the groundwork needed for the project, by scaling up 20-fold and launching the project's final phase -- sequencing the three billion base pairs that make up the human genome. Over the past year or so, Whitehead's sequencing center produced more than one billion base pairs or DNA letters that went toward assembling the "book of life" announced on June 26.
BETTER, FASTER THAN EXPECTED
Production of genome sequence has skyrocketed over the past year, with more than 60 percent of the sequence having been produced in the past six months alone. During this time, the project consortium has been producing 1,000 bases per second of raw sequence -- seven days a week, 24 hours a day.
The consortium's goal for spring 2000 was to produce a "working draft" version of the human sequence, an assembly containing overlapping fragments that cover approximately 90 percent of the genome and that are sequenced in "working draft" form, i.e., with some gaps and ambiguities. The consortium's ultimate goal is to produce a completely "finished" sequence, i.e. one with no gaps and 99.99 percent accuracy. The target date for this ultimate goal had been 2003, but the final, stand-the-test-of-time sequence will likely be produced considerably ahead of that schedule.
The Human Genome Project consortium centers in six countries have produced far more sequence data than expected (more than 22.1 billion bases of raw sequence data, comprising overlapping fragments totaling 3.9 billion bases and providing seven-fold sequence coverage of the human genome). As a result, the working draft is substantially closer to the ultimate finished form than the consortium expected at this stage.
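The seven-fold coverage figure quoted above follows directly from the raw totals, and the classic Lander-Waterman model gives a rough sense of how little of a genome such redundancy leaves uncovered. This is a back-of-the-envelope sketch using the numbers reported here, not a calculation from the project itself:

```python
# Sanity check of the coverage figures quoted above, plus the Lander-Waterman
# estimate of the genome fraction left uncovered at a given redundancy.
import math

raw_bases = 22.1e9     # raw sequence produced by the consortium centers
genome_size = 3.1e9    # approximate haploid human genome size

coverage = raw_bases / genome_size
print(f"coverage ~ {coverage:.1f}x")                     # ~ 7.1x

# Lander-Waterman: with reads landing at random positions at coverage c,
# the expected fraction of the genome left uncovered is e^(-c).
uncovered = math.exp(-coverage)
print(f"expected uncovered fraction ~ {uncovered:.5f}")  # ~ 0.00080
```

Under this idealized model, seven-fold coverage leaves well under 0.1% of the genome unsampled, which is why the working draft came out much closer to finished form than expected.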
Although the working draft is useful for most biomedical research, a highly accurate sequence that's as close to perfect as possible is critical for obtaining all the information there is to get from human sequence data. This has already been achieved for chromosomes 21 and 22, as well as for 24 percent of the entire genome.
In a related announcement, Celera Genomics announced that it completed its own first assembly of the human genome DNA sequence.
The public and private projects use similar automation and sequencing technology, but different approaches to sequencing the human genome. The public project uses a "hierarchical shotgun" approach in which individual large DNA fragments of known position are subjected to shotgun sequencing (i.e., shredded into small fragments that are sequenced, and then reassembled on the basis of sequence overlaps). The Celera project uses a "whole genome shotgun" approach, in which the entire genome is shredded into small fragments that are sequenced and put back together on the basis of sequence overlaps.
TRIUMPHANT FEELINGS
Behind all the publicity hoopla was the personal triumph and exhilaration felt by every Whitehead person involved with the project. In fact, for most of them, including the eight representatives from the Genome Center who went to a White House ceremony in Washington, the pride and excitement about a job well done far surpassed any appearance on the "Today" show.
[Eric Steven Lander (born 1957)], professor of biology and director of the Whitehead Genome Center, and Lauren Linton, co-director of its sequencing center, as well as sequencing center team leaders were in the White House East Room as President Clinton and Britain's Prime Minister Tony Blair made the historic announcement -- that the "book of life" had been decoded. The room was electric with anticipation as the band played "Hail to the Chief" and announced the President's entrance.
Remarks by President Clinton, Francis Collins (director of the National Human Genome Research Institute) and [Dr. John Craig Venter (born 1946)] (president of [Celera Genomics Corporation]) recognized the work of the thousands of scientists who helped the world reach this milestone.
"We are incredibly happy and feeling a sense of triumph. This is an exciting day, and the credit goes to all the people who worked day and night at a feverish pace both to create the sequencing center and to sequence every last bit of DNA to achieve the goals that we had set for this milestone," said Dr. Linton.
"It's very exciting to be here, to stand here in the White House and be recognized for our accomplishments. It was impressive and overwhelming and totally thrilling," said Nicole Stange-Thomann, leader of the clone preparation and library construction team.
She and several team leaders from Whitehead, including [Kevin Judd McKernan (born 1973)], Mike Zody, Lisa Kann, Jim Meldrim, Ken Dewar, Will Fitzhugh and Paul McEwan, attended the White House event and the press conference that followed at the Capital Hilton.
MEDIA BLITZ
Back in Cambridge, sequencing center assistant directors Bruce Birren and Chad Nusbaum rallied the troops for a celebration at the Whitehead Genome Center. They also faced huge and unprecedented media interest in the topic, handling dozens of interviews and television broadcasts that followed the announcement. WHDH-TV Channel 7 (the Boston affiliate of NBC) broadcast live from the Whitehead party, and Channels 4 and 56 also descended on the Whitehead sequencing center.
CNN, ABC, NBC, CBS, the Discovery Channel and many other national and international TV stations had prepared in advance, taking footage of the sequencing center and conducting interviews in the past several months, and were ready with stories featuring Whitehead soon after the June 26 announcement.
Whitehead was also featured in the New York Times, the Boston Globe, the Boston Herald, the Washington Post, the Los Angeles Times, Newsday, USA Today, the Wall Street Journal, the Dallas Morning News, Time, the Associated Press and many other newspapers and magazines.
While the media attention focused mostly on the sequencing center, some of it also trickled down to the Genome Center's Functional Genomics Group and the main Whitehead Institute on questions regarding functional genomics and other applications of the genome sequence. Media calls came at a frenzied pace as news outlets frantically tried to get Whitehead scientists to appear on shows on short notice.
MIT Professor of Biology and Whitehead member Richard A. Young appeared on MSNBC; Professor and Whitehead director Gerald Fink was on Greater Boston with Emily Rooney; and [Kevin Judd McKernan (born 1973)] (a team leader at the sequencing center) and David Altshuler (a research scientist at the Genome Center and Harvard endocrinologist) were on the Geraldo Rivera show on CNBC. All this happened within the span of just one day (June 26). Media calls continued to pour in all week as reporters did follow-up stories about the Genome Center's accomplishments.
"We deserve to be proud of our accomplishments and bask in this glory as the world's attention focuses on us. The credit goes to all the individuals at the Whitehead Genome Center who have worked hard to make us the flagship center of the Human Genome Project Consortium. Everyone associated with this project should feel proud," said Professor [Eric Steven Lander (born 1957)].
Robert Mullan Cook-Deegan
The human genome project began to take shape in 1985 and 1986 at various meetings and in the rumor mills of science. By the beginning of the federal government's fiscal year 1988, there were formal line items for genome research in the budgets of both the National Institutes of Health (NIH) and the Department of Energy (DOE). Genome research budgets have grown considerably in 1989 and 1990, and organizational structures have been in flux, but the allocation of funds through line-item budgets was a pivotal event, in this case signaling the rapid adoption of a science policy initiative. This paper focuses on how those dedicated budgets were created.
  https://www.nap.edu/read/1793/chapter/5#105
https://www.baltimoresun.com/news/bs-xpm-1999-11-17-9911170234-story.html
Lander, the scientist said, sees himself as the "Henry Kissinger" of a potential detente between J. Craig Venter, founder of Celera, and Dr. Francis Collins, chief of the NIH's Human Genome Project. For a while, Venter and Collins belittled each other's scientific strategies in public.
2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-screen-recording.mp4
https://drive.google.com/file/d/1tXgTnDGUo7HpcPYQNmXAky0ptyxwMUrh/view?usp=sharing
2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-img-cover.jpg
https://drive.google.com/file/d/1fP91svIJ3nYTxvIpVZU2f6MqB1b2JY_z/view?usp=sharing
Front Cover
Hallam Stevens
University of Chicago Press, Nov 4, 2013 - Science - 272 pages
Review: "Thirty years ago, the most likely place to find a biologist was standing at a laboratory bench, peering down a microscope, surrounded by flasks of chemicals and petri dishes full of bacteria. Today, you are just as likely to find him or her in a room that looks more like an office, poring over lines of code on computer screens. The use of computers in biology has radically transformed who biologists are, what they do, and how they understand life. In Life Out of Sequence, Hallam Stevens looks inside this new landscape of digital scientific work. Stevens chronicles the emergence of bioinformatics -- the mode of working across and between biology, computing, mathematics, and statistics -- from the 1960s to the present, seeking to understand how knowledge about life is made in and through virtual spaces. He shows how scientific data moves from living organisms into DNA sequencing machines, through software, and into databases, images, and scientific publications. What he reveals is a biology very different from the one of predigital days: a biology that includes not only biologists but also highly interdisciplinary teams of managers and workers; a biology that is more centered on DNA sequencing, but one that understands sequence in terms of dynamic cascades and highly interconnected networks. Life Out of Sequence thus offers the computational biology community welcome context for their own work while also giving the public a frontline perspective of what is going on in this rapidly changing field."
We purchased it on Google (for 44 dollars)
Copy of text (for chapter 1) placed into this text file : 2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-copied-text-ch-1.txt
https://drive.google.com/file/d/1kTHJl4vcVGjbE0SaGvVTYi49ayR1L4KM/view?usp=sharing
Before we can understand the effects of computers on biology, we need to understand what sorts of things computers are. Electronic computers were being used in biology even in the 1950s, but before 1980 they remained on the margins of biology -- only a handful of biologists considered them important to their work. Now most biologists would find their work impossible without using a computer in some way. It seems obvious -- to biologists as well as laypeople -- that computers, databases, algorithms, and networks are appropriate tools for biological work. How and why did this change take place?
Perhaps it was computers that changed. As computers got better, a standard argument goes, they were able to handle more and more data and increasingly complex calculations, and they gradually became suitable for biological problems. This chapter argues that it was, in fact, the other way around: it was biology that changed to become a computerized and computerizable discipline. At the center of this change were data, especially sequence data. Computers are data processors: data storage, data management, and data analysis machines. During the 1980s, biologists began to produce large amounts of sequence data. These data needed to be collected, stored, maintained, and analyzed. Computers -- data processing machines -- provided a ready-made tool.
Our everyday familiarity with computers suggests that they are universal machines: we can use them to do the supermarket shopping, run a business, or watch a movie. But understanding the effects of computers -- on biology at least -- requires us to see these machines in a different light. The early history of computers suggests that they were not universal machines, but designed and adapted for particular kinds of data-driven problems. When computers came to be deployed in biology on a large scale, it was because these same kinds of problems became important in biology. Modes of thinking and working embedded in computational hardware were carried over from one discipline to another.
The use of computers in biology -- at least since the 1980s -- has entailed a shift toward problems involving statistics, probability, simulation, and stochastic methods. Using computers has meant focusing on the kinds of problems that computers are designed to solve. DNA, RNA, and protein sequences proved particularly amenable to these kinds of computations. The long strings of letters could be easily rendered as data and managed and manipulated as such. Sequences could be treated as patterns or codes that could be subjected to statistical and probabilistic analyses. They became objects ideally suited to the sorts of tools that computers offered. Bioinformatics is not just using computers to solve the same old biological problems; it marks a new way of thinking about and doing biology in which large volumes of data play the central role. Data-driven biology emerged because of the computer's history as a data instrument.
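The point about sequences being "easily rendered as data" can be made concrete in a few lines. This is a minimal sketch: the sequence below is invented, and base composition and a k-mer count are only the simplest examples of the statistical treatment the text describes.

```python
# Once a DNA sequence is a string, frequency and pattern statistics
# fall out of ordinary data-processing operations.
from collections import Counter

seq = "ATGGCGTACGTTAGCATGCGT"  # invented example sequence

# Base composition (e.g., GC content, a basic statistical property)
counts = Counter(seq)
gc = (counts["G"] + counts["C"]) / len(seq)

# k-mer (substring) spectrum: the kind of pattern statistics that
# underlies sequence comparison and database searching
k = 3
kmers = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

print(f"GC content: {gc:.2f}")
print(kmers.most_common(3))
```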
The first part of this chapter provides a history of early electronic computers and their applications to biological problems before the 1980s. It pays special attention to the purposes for which computers were built and the uses to which they were put: solving differential equations, stochastic problems, and data management. These problems influenced the design of the machines. Joseph November argues that between roughly 1955 and 1965, biology went from being an "exemplar of systems that computers could not describe to exemplars of systems that computers could indeed describe." 1 The introduction of computers into the life sciences borrowed heavily from operations research. It involved mathematizing aspects of biology in order to frame problems in modeling and data management terms -- the terms that computers worked in. 2 Despite these adaptations, at the end of the 1970s, the computer still lay largely outside mainstream biological research. For the most part, it was an instrument ill-adapted to the practices and norms of the biological laboratory. 3
The invention of DNA sequencing in the late 1970s did much to change both the direction of biological research and the relationship of biology with computing. Since the early 1980s, the amount of sequence data has continued to grow at an exponential rate. The computer was a perfect tool with which to cope with the overwhelming flow of data. The second and third parts of this chapter consist of two case studies: the first of Walter Goad, a physicist who turned his computational skills toward biology in the 1960s; and the second of James Ostell, a computationally minded PhD student in biology at Harvard University in the 1980s. These examples show how the practices of computer use were imported from physics into biology and struggled to establish themselves there. These practices became established as a distinct subdiscipline of biology -- bioinformatics -- during the 1990s.
The computer was an object designed and constructed to solve particular sorts of problems, first for the military and, soon afterward, for Big Physics. Computers were (and are) good at solving certain types of problems: numerical simulations, differential equations, stochastic and statistical problems, and problems involving the management of large amounts of data. 4
The modern electronic computer was born in World War II. Almost all the early attempts to build mechanical calculating devices were associated with weapons or the war effort. Paul Edwards argues that "for two decades, from the early 1940s until the early 1960s, the armed forces of the United States were the single most important driver of digital computer development." 5 Alan Turing's eponymous machine was conceived to solve a problem in pure mathematics, but its first physical realization at Bletchley Park was as a device to break German ciphers. 6 Howard Aiken's Mark I, built by IBM between 1937 and 1943, was used by the US Navy's Bureau of Ships to compute mathematical tables. 7 The computers designed at the Moore School of Electrical Engineering at the University of Pennsylvania in the late 1930s were purpose-built for ballistics computations at the Aberdeen Proving Ground in Maryland. 8 A large part of the design and the institutional impetus for the Electronic Numerical Integrator and Computer (ENIAC), also developed at the Moore School, came from John von Neumann. As part of the Manhattan Project, von Neumann was interested in using computers to solve problems in the mathematics of implosion. Although the ENIAC did not become functional until after the end of the war, its design -- the kinds of problems it was supposed to solve -- reflected wartime priorities.
With the emergence of the Cold War, military support for computers would continue to be of paramount importance. The first problem programmed onto the ENIAC (in November 1945) was a mathematical model of the hydrogen bomb. 9 As the conflict deepened, the military found uses for computers in aiming and operating weapons, weapons engineering, radar control, and the coordination of military operations. Computers like MIT's Whirlwind (1951) and SAGE (Semi-Automatic Ground Environment, 1959) were the first to be applied to what became known as C3I: command, control, communications, and intelligence. 10
What implications did the military involvement have for computer design? Most early computers were designed to solve problems involving large sets of numbers. Firing tables are the most obvious example. Other problems, like implosion, also involved the numerical solution of differential equations. 11 A large set of numbers -- representing an approximate solution -- would be entered into the computer; a series of computations on these numbers would yield a new, better approximation. A solution could be approached iteratively. Problems such as radar control also involved (real-time) updating of large amounts of data fed in from remote military installations. Storing and iteratively updating large tables of data was the exemplary computational problem.
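That iterative pattern -- store a table of numbers, refine it pass by pass -- is easy to demonstrate. As a sketch only, the code below applies Jacobi relaxation to the steady-state heat equation for a rod; the boundary values and function name are invented for illustration, not drawn from any historical computation.

```python
# Each pass replaces the stored table of numbers with a better
# approximation: every interior point becomes the average of its
# neighbors, converging toward the steady-state solution.

def relax(u, passes):
    u = list(u)
    for _ in range(passes):
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# rod with ends held at 0 and 100 degrees, interior initially 0
u0 = [0.0] + [0.0] * 9 + [100.0]
u = relax(u0, 500)
print([round(x, 1) for x in u])  # approaches a linear temperature profile
```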
Another field that quickly took up the use of digital electronic computers was physics, particularly the disciplines of nuclear and particle physics. The military problems described above belonged strictly to the domain of physics. Differential equations and systems of linear algebraic equations can describe a wide range of physical phenomena such as fluid flow, diffusion, heat transfer, electromagnetic waves, and radioactive decay. In some cases, techniques of military computing were applied directly to physics problems. For instance, missile telemetry involved problems of real-time, multichannel communication that were also useful for controlling bubble chambers. 12 A few years later, other physicists realized that computers could be used to great effect in "logic" machines: spark chambers and wire chambers that used electrical detectors rather than photographs to capture subatomic events. Bubble chambers and spark chambers were complicated machines that required careful coordination and monitoring so that the best conditions for recording events could be maintained by the experimenters. By building computers into the detectors, physicists were able to retain real-time control over their experimental machines. 13
But computers could be used for data reduction as well as control. From the early 1950s, computers were used to sort and analyze bubble chamber film and render the data into a useful form. One of the main problems for many particle physics experiments was the sorting of the signal from the noise: for many kinds of subatomic events, a certain "background" could be anticipated. Figuring out just how many background events should be expected inside the volume of a spark chamber was often a difficult problem that could not be solved analytically. Again following the lead of the military, physicists turned to simulations using computers. Starting with random numbers, physicists used stochastic methods that mimicked physical processes to arrive at "predictions" of the expected background. These "Monte Carlo" processes evolved from early computer simulations of atomic bombs on the ENIAC to sophisticated background calculations for bubble chambers. The computer itself became a particular kind of object: that is, a simulation machine.
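A minimal Monte Carlo calculation in this spirit can be sketched as follows. This is the textbook toy case (estimating an area by counting random hits), not any actual background calculation; the function name and sample size are invented for illustration.

```python
# Monte Carlo in miniature: start with random numbers, count the
# "events" satisfying a condition, and use the count to estimate a
# quantity that is hard to compute analytically (here, pi, via the
# area of a quarter circle inside the unit square).
import random

def estimate_pi(n, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * hits / n

print(estimate_pi(100_000))  # close to pi, within sampling error
```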
The other significant use of computers that evolved between 1945 and 1955 was in the management of data. In many ways, this was a straightforward extension of the ENIAC's ability to work with large sets of numbers. The Moore School engineers J. Presper Eckert and John Mauchly quickly saw how their design for the Electronic Discrete Variable Advanced Calculator (EDVAC) could be adapted into a machine that could rapidly sort data -- precisely the need of commercial work. This insight inspired the inventors to incorporate the Eckert-Mauchly Computer Corporation in December 1948 with the aim of selling electronic computers to businesses. The first computer they produced -- the UNIVAC (Universal Automatic Computer) -- was sold to the US Census Bureau in March 1951. By 1954, they had sold almost twenty machines to military (the US Air Force, US Army Map Service, Atomic Energy Commission) and nonmilitary customers (General Electric, US Steel, DuPont, Metropolitan Life, Consolidated Edison). Customers used these machines for inventory and logistics. The most important feature of the computer was its ability to "scan through a reel of tape, find the correct record or set of records, perform some process in it, and return the results again to tape." 14 It was an "automatic" information processing system. The UNIVAC was successful because it was able to store, operate on, and manipulate large tables of numbers -- the only difference was that these numbers now represented inventory or revenue figures rather than purely mathematical expressions.
Between the end of World War II and the early 1960s, computers were also extensively used by the military in operations research (OR). OR and the related field of systems analysis were devoted to the systematic analysis of logistical problems in order to find optimally efficient solutions. 15 OR involved problems of game theory, probability, and statistics. These logical and numerical problems were understood as exactly the sorts of problems computers were good at solving. 16 The use of computers in OR and systems analysis not only continued to couple them to the military, but also continued their association with particular sorts of problems: namely, problems with large numbers of well-defined variables that would yield to numerical and logical calculations. 17
What were the consequences of all this for the application of computers to biology? Despite their touted "universality," digital computers were not equally good at solving all problems. The ways in which early computers were used established standards and practices that influenced later uses. 18 The design of early computers placed certain constraints on where and how they would and could be applied to biological problems. The use of computers in biology was successful only where biological problems could be reduced to problems of data analysis and management. Bringing computers to the life sciences meant following specific patterns of use that were modeled on approaches in OR and physics and which reproduced modes of practice and patronage from those fields. 19
In the late 1950s, there were two alternative notions of how computers might be applied to the life sciences. The first was that biology and biologists had to mathematize, becoming more like the physical sciences. The second was that computers could be used for accounting purposes, creating "a biology oriented toward the collation of statistical analysis of large volumes of quantitative data." 20 Both notions involved making biological problems amenable to computers' data processing power. Robert Ledley -- one of the strongest advocates of the application of computers in biology and medicine -- envisioned the transformation of biologists' research and practices along the lines of Big Science. 21
In 1965, Ledley published Use of Computers in Biology and Medicine. The foreword (by Lee Lusted of the National Institutes of Health) acknowledged that computer use required large-scale funding and cooperation similar to that seen in physics. 22 Ledley echoed these views in his preface:
Because of an increased emphasis on quantitative detail, elaborate experimentation and extensive quantitative data analysis are now required. Concomitant with this change, the view of the biologist as an individual scientist, personally carrying through each step of his data-reduction processes -- that view is rapidly being broadened, to include the biologist as part of an intricate organizational chart that partitions scientific technical and administrative responsibilities. 23
Physics served as the paradigm of such organization. But the physical sciences also provided the model for the kinds of problems that computers were supposed to solve: those involving "large masses of data and many complicated interrelating factors." Many of the biomedical applications of computers that Ledley's volume explored treated biological systems according to their physical and chemical bases. The examples Ledley describes in his introduction include the numerical solution of differential equations describing biological systems (including protein structures, nerve fiber conduction, muscle fiber excitability, diffusion through semipermeable membranes, metabolic reactions, blood flow); simulations (Monte Carlo simulation of chemical reactions, enzyme systems, cell division, genetics, self-organizing neural nets); statistical analyses (medical records, experimental data, evaluation of new drugs, data from electrocardiograms and electroencephalograms, photomicrographic analysis); real-time experimental and clinical control (automatic respirators, analysis of electrophoresis, diffusion, and ultracentrifuge patterns, and counting of bacterial cultures); and medical diagnosis (including medical records and distribution and communication of medical knowledge). 24 Almost all the applications were either borrowed directly from the physical sciences or depended on problems involving statistics or large volumes of information. 25
For the most part, the mathematization and rationalization of biology that Ledley and others believed was necessary for the "computerization" of the life sciences did not eventuate. 26 By the late 1960s, however, the invention of minicomputers and the general reduction in the costs of computers allowed more biologists to experiment with their use. 27 At Stanford University, a small group of computer scientists and biologists led by Edward Feigenbaum and Joshua Lederberg began to take advantage of these changes. After applying computers to the problem of determining the structure of organic molecules, this group began to extend their work into molecular biology. 28
In 1975, they created MOLGEN, or "Applications of Symbolic Computation and Artificial Intelligence to Molecular Biology." The aim of this project was to combine expertise in molecular biology with techniques from artificial intelligence to create "automated methods for experimental assistance," including the design of complicated experimental plans and the analysis of nucleic acid sequences. 29
Lederberg and Feigenbaum initially conceived MOLGEN as an artificial intelligence (AI) project for molecular biology. MOLGEN included a "knowledge base" compiled by expert molecular biologists and containing "declarative and procedural information about structures, laboratory conditions, [and] laboratory techniques." 30 They hoped that MOLGEN, once provided with sufficient information, would be able to emulate the reasoning processes of a working molecular biologist. Biologists did not readily take up these AI tools, and their use remained limited. What did begin to catch on, however, were the simple tools created as part of the MOLGEN project for entering, editing, comparing, and analyzing protein and nucleic acid sequences. In other words, biologists used MOLGEN for data management, rather than for the more complex tasks for which it was intended. By the end of the 1970s, computers had not yet exerted a wide influence on the knowledge and practice of biology. Since about 1975, however, computers have changed what it means to do biology: they have "computerized" the biologist's laboratory.
By the early 1980s, and especially after the advent of the first personal computers, biologists began to use computers in a variety of ways. These applications included the collection, display, and analysis of data (e.g., electron micrographs, gel electrophoresis), simulations of molecular dynamics (e.g., binding of enzymes), simulations of evolution, and especially the study of the structure and folding of proteins (reconstructing data from X-ray crystallography, visualization, simulation and prediction of folding). 31 However, biologists saw the greatest potential of computers in dealing with sequences. In 1984, for instance, Martin Bishop wrote a review of software for molecular biology; out of fifty-three packages listed, thirty were for sequence analysis, a further nine for "recombinant DNA strategy," and another seven for database retrieval and management. 32 The analysis of sequence data was becoming the exemplar for computing in biology. 33
As data processing machines, computers could be used in biology only in ways that aligned with their uses in the military and in physics. The early design and use of computers influenced the ways in which they could and would be applied in the life sciences. In the 1970s, the computer began to bring new kinds of problems (and techniques for solving them) to the fore in biology -- simulation, statistics, and large-volume data management and analysis were the problems computers could solve quickly. We will see how these methods had to struggle to establish themselves within and alongside more familiar ways of knowing and doing in the life sciences.
The next two sections provide two examples of ultimately successful attempts to introduce computers into biology. What these case studies suggest is that success depended not on adapting the computer to biological problems, but on adapting biology to problems that computers could readily solve. In particular, they demonstrate the central roles that data management, statistics, and sequences came to play in these new kinds of computationally driven biology. Together, these case studies also show that the application of computers to biology was not obvious or straightforward -- Goad was able to use computers only because of his special position at Los Alamos, while Ostell had to struggle for many years to show the relevance and importance of his work. Ultimately, the acceptance of computers by biologists required a redefinition of the kinds of problems that biology addressed.
Walter Goad (1925-2000) came to Los Alamos Scientific Laboratories as a graduate student in 1951, in the midst of President Truman's crash program to construct a hydrogen bomb. He quickly proved himself an able contributor to that project, gaining key insights into problems of neutron flux inside supercritical uranium. There is a clear continuity between some of Goad's earlier (physics) and later (biological) work: both used numerical and statistical methods to solve data-intensive problems. Digital electronic computers were Goad's most important tool. As a consequence, Goad's work imported specific ways of doing and thinking from physics into biology. In particular, he brought ways of using computers as data management machines. Goad's position as a senior scientist in one of the United States' most prestigious scientific research institutions imparted a special prestige to these modes of practice. Ultimately, the physics-born computing that Goad introduced played a crucial role in redefining the types of problems that biologists addressed; the reorganization of biology that has accompanied the genomic era can be understood in part as a consequence of the modes of thinking and doing that the computer carried from Los Alamos.
We can reconstruct an idea of the kinds of physics problems that Goad was tackling by examining both some of his published work from the 1950s and his thesis on cosmic ray scattering. 34 This work had three crucial features. First, it depended on modeling systems (like neutrons) as fluids using differential or difference equations. Second, such systems involved many particles, so their properties could only be treated statistically. Third, insight was gained from the models by using numerical or statistical methods, often with the help of a digital electronic computer. During the 1950s, Los Alamos scientists pioneered new ways of problem solving using these machines.
Electronic computers were not available when Goad first came to Los Alamos in 1951 (although Los Alamos had had access to computers elsewhere since the war). By 1952, however, the laboratory had the MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer), which had been constructed under the direction of Nicholas Metropolis. Between 1952 and 1954, Metropolis worked with Enrico Fermi, Stanislaw Ulam, George Gamow, and others on refining Monte Carlo and other numerical methods for use on the new machine. They applied these methods to problems in phase-shift analysis, nonlinear-coupled oscillators, two-dimensional hydrodynamics, and nuclear cascades. 35 Los Alamos also played a crucial role in convincing IBM to turn its efforts to manufacturing digital computers in the early 1950s. It was the first institution to receive IBM's "Defense Calculator," the IBM 701, in March 1953. 36
When attempting to understand the motion of neutrons inside a hydrogen bomb, it is not possible to write down (let alone solve) the equations of motion for all the neutrons (there are far too many). Instead, it is necessary to find ways of summarizing the vast amounts of data contained in the system. Goad played a central role in Los Alamos' work on this problem. By treating the motion of neutrons like the flow of a fluid, Goad could describe it using well-known differential equations. These equations could be solved by "numerical methods" -- that is, by finding approximate solutions through intensive calculation. 37 In other cases, Goad worked by using Monte Carlo methods -- that is, by simulating the motion of neutrons as a series of random moves. 38 In this kind of work, Goad used electronic computers to perform the calculations: the computer acted to keep track of and manage the vast amounts of data involved. The important result was not the motion of any given neutron, but the overall pattern of motion, as determined from the statistical properties of the system.
When Goad returned to his thesis at the end of 1952, his work on cosmic rays proceeded similarly. He was attempting to produce a model of how cosmic rays would propagate through the atmosphere. Since a shower of cosmic rays involved many particles, once again it was not possible to track all of them individually. Instead, Goad attempted to develop a set of equations that would yield the statistical distribution of particles in the shower in space and time. These equations were solved numerically based on theoretical predictions about the production of mesons in the upper atmosphere.[39] In both his work on the hydrogen bomb and his thesis, Goad's theoretical contributions centered on using numerical methods to understand the statistics of transport and flow. By the 1960s, Goad had become increasingly interested in some problems in biology. While visiting the University of Colorado Medical Center, Goad collaborated extensively with the physical chemist John R. Cann, examining transport processes in biological systems. First with electrophoresis gels, and then extending their work to ultracentrifugation, chromatography, and gel filtration, Goad and Cann developed models for understanding how biological
https://citizensciences.net/bruno-strasser/
University of Chicago Press, Jun 7, 2019 - Science - 392 pages
Databases have revolutionized nearly every aspect of our lives. Information of all sorts is being collected on a massive scale, from Google to Facebook and well beyond. But as the amount of information in databases explodes, we are forced to reassess our ideas about what knowledge is, how it is produced, to whom it belongs, and who can be credited for producing it.
Every scientist working today draws on databases to produce scientific knowledge. Databases have become more common than microscopes, voltmeters, and test tubes, and the increasing amount of data has led to major changes in research practices and profound reflections on the proper professional roles of data producers, collectors, curators, and analysts.
Collecting Experiments traces the development and use of data collections, especially in the experimental life sciences, from the early twentieth century to the present. It shows that the current revolution is best understood as the coming together of two older ways of knowing: collecting and experimenting, the museum and the laboratory. Ultimately, Bruno J. Strasser argues that by serving as knowledge repositories, as well as indispensable tools for producing new knowledge, these databases function as digital museums for the twenty-first century.
Origins of the Human Genome Project: Why Sequence the Human Genome When 96% of It Is Junk?
I was not much involved in the discussion and debate about initiating a program to determine the base-pair sequence of the human genome, until the idea surfaced publicly. As I recall the genesis of the Human Genome Project, the idea for sequencing the human genome was initiated independently and nearly simultaneously by [Robert Louis Sinsheimer (born 1920)], then Chancellor of the University of California–Santa Cruz (UCSC), and [Charles Peter DeLisi (born 1941)] of the United States Department of Energy. Each had his own purpose in promoting such an audacious undertaking, but the goals of their ambitious plans are best left for them to tell. The proposal was initially aired at a meeting of a small group of scientists convened by Sinsheimer at UCSC in May 1985 and received the backing of those who attended. I became aware of the project through an editorial or op-ed-style piece by Renato Dulbecco in Science, March 1986. Dulbecco's enthusiasm for the project was based on his conviction that only by having the complete human genome sequence could we hope to identify the many oncogenes, tumor suppressors, and their modifiers. Although that particular goal seemed problematic, I was enthusiastic about the likelihood that the sequence would reveal important organizational, structural, and functional features of mammalian genes.
That conviction stemmed from having seen, firsthand, the tremendous advantages of knowing the sequence of SV40 (in 1978) and adenovirus genomic DNAs (in 1979–1980), particularly for deciphering their biological properties. In each of these instances, as well as for the longer and more complex genomic DNAs of the herpes virus and cytomegalovirus, knowing the sequences was critical for accurately mapping their mRNAs, identifying the introns, and making pretty good guesses about the transcriptional regulatory elements. Even more significant was the ability to engineer precisely targeted modifications to their genomes (e.g., base changes, deletions and additions, sequence rearrangements, and substitutions of defined segments with nonviral DNA). One could easily imagine that knowing the human DNA sequence would enable us to manipulate the sequences of specific genes for a variety of hitherto-undoable experiments.
[ NOTE - See https://sci-hub.se/10.1038/273113a0# or https://pubmed.ncbi.nlm.nih.gov/205802/ - Nature. 1978 May 11;273(5658):113-20. doi: 10.1038/273113a0. "Complete nucleotide sequence of SV40 DNA": W Fiers, R Contreras, G Haegemann, R Rogiers, A Van de Voorde, H Van Heuverswyn, J Van Herreweghe, G Volckaert, M Ysebaert. PMID: 205802. DOI: 10.1038/273113a0 ... 1978-05-nature-magazine-complete-nucleotide-sequene-of-sv40-doi-10-1038-27311a0-fier.pdf / https://drive.google.com/file/d/11jPQwVyUL2FOrwos59Jan0pEU7MIGKto/view?usp=sharing ]
Aware of the upcoming 1986 Cold Spring Harbor (CSH) Symposium on the "Molecular Biology of Homo sapiens," I suggested to Jim Watson that it might be interesting to convene a small group of interested people to discuss the proposal's feasibility. I thought that such a rump session might attract people who would be engaged by the proposal, and Watson agreed to set aside some time during the first free afternoon. As the attendees assembled, it was clear that the project was on the minds of many, and almost everyone who attended the symposium showed up for the session at the newly dedicated Grace Auditorium. Wally Gilbert and I were assigned the task of guiding the discussion. Needless to say, what followed was highly contentious; the reactions ranged from outrage to moderate enthusiasm, the former outnumbering the latter by about five to one.
Gilbert began the discussion by outlining his favored approach: fragment the entire genome's DNA into a collection of overlapping fragments, clone the individual fragments, sequence the cloned segments with the then existing sequencing technology, and assemble their original order with appropriate computer software. In his most self-assured manner, Gilbert estimated that such a project could be completed in ~10-20 years at a net cost of ~$1 per base, or ~$3 billion. Even before he finished, one could hear the rumblings of discontent and the audience's gathering outrage. It was not just his matter-of-fact manner and self-assurance about his projections that got the discussion off on the wrong foot, for there was also the rumor (which may well have been planted by Gilbert) that a company he was contemplating starting would undertake the project on its own, copyright the sequence, and market its content to interested parties.
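The core computational idea in Gilbert's proposal (fragment, sequence, then reassemble the original order from overlaps) can be illustrated with a toy greedy overlap assembler. This is a deliberately simplified sketch of the overlap-and-merge principle, not the actual software later used by the project; the fragment strings and the `min_len` threshold are invented for illustration.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments, min_len=3):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, None, None)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if i is None:  # no overlaps left: concatenate what remains
            return "".join(frags)
        # Merge the best pair, keeping the shared region only once.
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# Three overlapping reads from a short (made-up) sequence:
reads = ["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]
print(greedy_assemble(reads))  # ATTAGACCTGCCGGAATAC
```

Even this toy hints at why the real problem was hard at genome scale: the pairwise-overlap search is quadratic in the number of fragments, and repetitive "junk" sequence creates ambiguous overlaps that a greedy merge can resolve incorrectly.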
One could sense the fury of many in the audience, and there was a rush to speak out in protest. Among the more vociferous comments, three points stood out:
The cost of doing this project would diminish federal funding for individual investigator-initiated science and thereby would shift the culture of basic biological research from "Little Science" to "Big Science." Some feared that biology would experience the same consequence that physics did when massive projects like the Stanford Linear Accelerator Center were undertaken in that field.
Many thought that Gilbert's approach was boring and thus would not attract well-experienced people, which, most likely, would make the product suspect. Moreover, the benefits of the sequence project might not materialize until the very late stages.
A surprisingly vocal group argued that, because <5% of the DNA sequence was informational (i.e., represented by genes encoding proteins and RNAs), there was no point in sequencing what was unaffectionately labeled "junk," defined as all the stuff between genes and within introns. Why, many asked, should we spend a lot of money and effort to sequence what was clearly irrelevant?
The fury of the reactions of some of our most respected molecular geneticists startled me. Several of the speakers argued that certain areas of research, usually their own specialty, were far more valuable than the sequence of the human genome. I was particularly irked by the claims that there was no need to sequence the entire 3 billion base pairs and that knowing the sequences of only the genes would suffice. Frankly, I was shocked by what seemed to me to be a display of what I termed an "arrogance of ignorance." Why, I asked, should we foreclose on the likelihood that noncoding regions within and surrounding genes contain signals that we have not yet recognized or learned to assay? Furthermore, wasn't it conceivable that there are DNA sequences for functions other than encoding proteins and RNAs? For example, the DNA sequence might serve for other organismal functions (e.g., chromosomal replication, packaging of the DNA into highly condensed chromatin, or control of development). It seemed surprising and disconcerting to hear that many were prepared to discard, a priori, a potential source of such information, and it was even more surprising that this myopic view persisted both throughout the meeting and for some time afterward.
During the session, I tried to steer the discussion away from the cost issue and the fuzzy arguments about Little Science versus Big Science. Perhaps it was better, I thought, to tempt the audience: after all, this was a scientific meeting with some of the most creative minds in attendance. What if, I said, some philanthropic source descended into our midst and offered $3 billion to produce the sequence of the human genome at the end of 10 years? And, I suggested, assume that we were assured that there would be no impact on existing sources of funding. Would the project be worth doing? If so, how should we proceed with it? Gilbert had offered his approach, but, I asked, are there better ways?
To get that discussion started, I proposed that we might consider sequencing only cloned cDNAs from a variety of libraries made from different tissues and conditions. Knowledge of the expressed sequences would enable us to bootstrap our way to cloning the genomic versions of the cDNAs and, thereby, enable us to identify the introns and the likely promoters. Such an approach, I argued, would very likely yield valuable and interesting cloned material for many investigators to work on long before we knew the entire sequence. The premise was that the effort would identify the chromosomal versions of the expressed sequences and, with some cleverness, their flanking sequences.
However, try as I might, I could not engage the audience in that exercise. Their concerns were about the price that would be paid by traditional ways of doing science and that many more interesting and important problems would be abandoned or neglected. The meeting ended with most people unconvinced of the value of proceeding with a project to sequence the human genome.
At the end of the meeting, I flew to Basel, Switzerland, where I was part of an advisory group to the Basel Institute of Immunology. At the hotel, I found a group of American and European colleagues perched on the veranda overlooking the fast-flowing Rhine River. They were clearly aware of the discussion at CSH and my participation in it. I again had to defend my support for the sequencing project against arguments that were a repetition of those expressed at CSH.
Soon thereafter, the National Academy of Sciences convened a blue-ribbon committee, many members of which had been among the critical voices at CSH. Their report recast the scope and direction of the project in a more constructive way; the principal change was the proposal to proceed in phases: determine the genetic map by use of principally polymorphic markers, create a physical map consisting of linked cloned cosmids, and focus on developing more cost- and time-efficient means of sequencing DNA. The most important recommendation, in my view, was to include in the project the sequencing of the then-favorite model organisms: Escherichia coli, Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, and the mouse. It was clear that the new formulation did not threaten research support for those who worked on prokaryotes and lower eukaryotes. More likely, the additional funding would energize research on these organisms. It also provided a livelihood for those interested in mapping their favorite organism and for those committed to cloning and mapping large segments of DNA. In the end, people were mollified by the realization that they would not be left out of the project's funding. Also, the proposal had a logic for how to proceed and the acceptance that useful information would be generated long before the project was completed.
Sometime after the project was under way, Watson became the director of the project and set the agenda for how the project would proceed. He was committed to a razor-like focus on the development of genetic and physical maps, discouraging and even dismissing proposals that focused on making the work relevant to the biology. Indeed, that strategy was enforced by the study sections that reviewed genome-project grant proposals; proposals involving methods that would further the two mapping projects received preference, whereas those that hinted at deviation from that goal went unfunded. There is little question that Watson's forceful and committed leadership ensured the project's success.
It is interesting, in retrospect, that the course Gilbert had proposed for obtaining the human genome sequence (shotgun cloning, sequencing, and assembly of completed bits into the whole) was what carried the day. Also, people who had dismissed the necessity of knowing the sequence of the junk now readily admit that the junk may very well be the crown jewels, the stuff that orchestrates the coding sequences in biologically meaningful activities.
The past few years have revealed unexpected findings regarding noncoding genomic sequences, giving assurance that there is much more to discover in the genome sequences. Moreover, understanding the function of the noncoding genome sequences is very likely to accelerate, as the tools for mining the sequence and the application of robust and large-scale methods for detecting transcription become more refined.
PAUL BERG
Stanford University Medical Center
Stanford
https://www.tcracs.org/tcrwp/1about/1biosketch/1sumex-aim/