Perspective | Essay

The future of humans as model organisms


Science, 10 Aug 2018:
Vol. 361, Issue 6402, pp. 552-553
DOI: 10.1126/science.aau7779

Ten years ago, Nobel laureate Sydney Brenner remarked, “We don't have to search for a model organism anymore. Because we are the model organisms” (1). Indeed, over the past decade, we have deepened our understanding not only of how the genomic blueprint for human biology manifests physical and chemical characteristics (phenotype), but also of how traits can change in response to the environment. A better grasp of the dynamic relationship between genes and the environment may truly sharpen our ability to determine disease risk and response to therapy. A collection of human phenotypic data, and its integration with “omic” information (genomic, proteomic, transcriptomic, epigenomic, microbiomic, and metabolomic, among others), along with remote-sensing data, could provide extraordinary opportunities for discovery. A comprehensive “human phenomic science” approach could catalyze this effort through both large-scale “light” phenotyping studies and “deep” phenotyping studies performed in smaller groups of individuals.

Data integration is already advancing medicine at the individual patient level. The identification of unexpected disease associations with genes (2), the linkage of gene variants to unanticipated human phenotypes (3), and the use of Mendelian randomization (a method to estimate causal effects) to predict biomarker validity (4) or drug response (5) are some examples. Efforts are underway to broaden the diversity of populations studied and to establish standards for phenotyping (6), but the depth and quality of characterization at scale are limited by cost and feasibility.
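
As a brief illustration of the method in its simplest instrumental-variable form: Mendelian randomization uses a genetic variant $G$ that influences an exposure $X$ to estimate the causal effect of $X$ on an outcome $Y$ via the Wald ratio $\hat{\beta}_{X \rightarrow Y} = \hat{\beta}_{G \rightarrow Y} / \hat{\beta}_{G \rightarrow X}$, where $\hat{\beta}_{G \rightarrow Y}$ and $\hat{\beta}_{G \rightarrow X}$ are the estimated variant-outcome and variant-exposure associations. Because alleles are assigned at conception, such estimates are less vulnerable to confounding and reverse causation than ordinary observational associations.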

Complementary to large-scale, comparatively lighter basal phenotype studies are deeper phenotyping studies of smaller groups of individuals in settings where phenotypic responses can be evoked under well-controlled conditions (7). Such studies can address and refine hypotheses generated from data at scale, enabling investigators interested in mechanisms and drug development to seek proof of concept in focused, prospective clinical trials. Deep phenotyping is ideally suited to elucidate gene function and comes close to conducting a “human gene knockout” study (8). The approach can also connect the modifiable causes of common diseases to patients with rare diseases, enabling a better understanding of the pathophysiology of common diseases. For example, detailed characterization of “outlier” patients can direct mechanistic interrogations of common diseases at population scale. One example is the relationship between the gene encoding proprotein convertase subtilisin/kexin type 9 (PCSK9) and cardiovascular disease (9). The cardioprotective phenotype revealed through light phenotyping at scale suggested the validity of PCSK9 as a drug target; this was confirmed in deep phenotyping studies and then in randomized clinical trials. Another example is the autoimmune phenotype of patients with atypical presentations of myalgic encephalomyelitis/chronic fatigue syndrome (10). Deep phenotyping revealed a mechanistic basis for a symptom complex that is sometimes miscategorized as an affective disorder. Furthermore, the two approaches to phenotyping might be integrated to enable the identification of patients with rare diseases from large phenotypic datasets. This could lead to more efficient recruitment for deep phenotyping studies. A major challenge will be harmonizing protocols for deep phenotyping. However, while moving toward standardization, it may be possible to integrate data when differing protocols are used, as suggested in a recent quantitative analysis of merged yeast proteome datasets (11).

What would this approach mean for animal studies? Understanding the larger picture at the organism level has been restricted by the practical limitations of bench studies. By necessity, sample sizes are small and the phenotypic variance inherent to the system must be controlled. Despite the availability of resources such as the Collaborative Cross and Diversity Outbred mice (which recapitulate levels of genetic heterozygosity seen in humans), the majority of preclinical studies are still performed with inbred strains of mice housed in artificial environments, with every effort made to obtain phenotypic uniformity. The “noise” in the system is managed to enable reductionist hypotheses to be tested. However, modeling in mice (even with outbred strains) may not sufficiently predict the human condition. With deep genotypic and phenotypic evaluation of large human cohorts, the “physiological noise” due to variability in genetics and environmental exposures becomes measurable and meaningful. A true big-picture systems biology approach to discovery is feasible, and can be facilitated by techniques that provide the quantity and quality of data required for physiological measurement at scale (imaging, monitoring devices, etc.).

Human phenomic science would put a collective tag on a variety of experimental approaches. Examples include harvesting induced pluripotent stem cells to parse mechanistic distinctions in patients with varied pain syndromes (12), or characterizing distinctions in the phenotypes evoked by vascular, inflammatory, or metabolic stimuli to gain insight into time-dependent disease expression (13). Such studies could be integrated with data gathered from the same patients “in the wild” using remote-sensing devices. This would facilitate an essential step in developing precision medicine strategies: comparing interindividual differences in exposure, behavior, and drug response. This might be useful in deciphering “nondipping” hypertension, for which, despite its linkage at scale to poor cardiovascular outcomes, there is little information on the efficacy of conventionally formulated antihypertensive drugs, the mechanism involved, or the stability of the phenotype at the individual level.

As for drug development, heterogeneous information (physiologic, multi-omics, and imaging data) collected under basal and evoked conditions could be used to interrogate dose-dependent differences in drug response in human phenotypic studies. This information could be integrated with analogous data from other model organisms, allowing more direct linkage to functional outcomes relevant to humans. Some examples include the development of algorithms that predict rare but serious adverse effects of drugs, and the identification of well-defined subgroups that have favorable therapeutic responses to specific medicines. For instance, in assessing nonsteroidal anti-inflammatory drugs (NSAIDs) that differ in selectivity for inhibiting cyclooxygenase-2, one could compare dose-dependent perturbation of the transcriptome, proteome, metabolome, and microbiome across species, with blood pressure as a quantitative surrogate for cardiovascular risk in humans and mice, together with thrombogenesis assays in mice and fish. With machine learning and kinetic and structure-based modeling approaches, one could then identify signatures of NSAID-induced cardiovascular risk and test their predictive efficacy at the individual level prospectively in clinical trials. An extension of this concept is to complement Mendelian randomization for predicting drug efficacy and risk in an individualized approach. New cancer drugs highlight the need to elucidate mechanisms that underlie variability of drug response in therapeutic efficacy as well as in toxicity. Chimeric antigen receptor (CAR)–T cells may cure a life-threatening disease but may also cause a lethal cytokine storm (14). Similarly, checkpoint blockade with programmed cell death–1 inhibitors may restrain, leave unaltered, or advance tumor progression (15).
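
To make this idea concrete, a minimal sketch of such a signature search might cross-validate a sparse (L1-penalized) classifier that maps dose-dependent multi-omic features to a blood pressure-based risk label. The sketch below uses simulated data; the sample sizes, doses, and risk threshold are purely hypothetical, not a validated pipeline.

# Minimal sketch: cross-validated, regularized classifier linking
# hypothetical dose-dependent multi-omic features to a blood pressure-
# based cardiovascular-risk label. All data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 500          # e.g., transcript, protein, metabolite levels
X = rng.normal(size=(n_subjects, n_features))
dose = rng.choice([0.0, 0.5, 1.0], size=n_subjects)     # NSAID dose groups (arbitrary units)

# Simulate a small dose-dependent effect of a few features on blood pressure.
signature = rng.choice(n_features, size=10, replace=False)
blood_pressure = 120 + 5 * dose * X[:, signature].sum(axis=1) + rng.normal(scale=5.0, size=n_subjects)
at_risk = (blood_pressure > np.median(blood_pressure)).astype(int)   # crude binary surrogate

# L1-penalized logistic regression yields a sparse candidate "signature".
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
features = np.column_stack([X, dose])
scores = cross_val_score(model, features, at_risk, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")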


Developing a suite of evoked phenotypes might improve the screening of newly approved drugs for unanticipated risks and efficacies. In the United States, the Food and Drug Administration could provide a safe harbor for the emergence of such knowledge, much as it did to encourage studies of pharmacogenetics. Too often, understanding the spectrum of drug action—both efficacy and toxicity—is delayed by risk-averse strategies of drug development focused only on the planned initial indication for approval. Consent to detailed phenotyping might also be linked to drugs still in development but under compassionate use, thus accelerating the acquisition of knowledge relevant to mechanism of action, potential toxicity, clinical trial design, and ultimate clinical utility.

The value of large, prospective datasets and associated biobanks to drug development and precision medicine is already apparent. This could be enhanced by the detailed evoked phenotyping that is possible only in relatively small numbers of individuals. As the cost of data recording and analysis declines, deeper phenotyping can be applied at scale, and the clear distinction between the two phenotyping approaches is likely to erode.

What is the advantage of labeling this type of research as human phenomic science? After all, phenotyping studies in humans have long existed. However, investigators are few and resources are scattered. Shifting from the detection of large average effects to a more precise approach to medicine requires substantial investment and skilled investigators, who are currently in short supply. Within the spectrum of clinical research, the naming of “health services research” and “clinical epidemiology” helped to shape those disciplines with defined skill sets, training programs, and collective activities (meetings, departments). The success of this tactic is reflected in the growth of, and investments in, these clinical areas. Giving human phenome–related research a name can serve a similar purpose, defining, maturing, and refining the endeavor; achieving it will require a strategic effort to recruit investigators and to develop the infrastructure needed to support human phenomic science.

References

  1. www.genomeweb.com/archive/sydney-brenner-urges-cancer-researchers-consider-bedside-bench-approach
  2. www.nationalacademies.org/hmd/Activities/Research/GenomicBasedResearch/Innovation-Collaboratives/Global_Genomic_Medicine_Collaborative.aspx
