AUTHOR COMMENTARIES - 2008

April 2008

Dr. Garry Nolan
A Featured Scientist from Essential Science Indicators℠

According to the December 2007 report of new entrants for Essential Science Indicators℠ from Thomson Scientific, the work of Garry Nolan entered the top 1% in the field of Molecular Biology & Genetics, with 16 papers cited a total of 1,347 times. Dr. Nolan also has Highly Cited Papers in the fields of Clinical Medicine and Biology & Biochemistry.

Dr. Nolan received his BS degree from Cornell University and his Ph.D. from Stanford University. He also did postdoctoral work at MIT and Rockefeller University.


At present, he is an Associate Professor in the Department of Molecular Pharmacology and the Department of Microbiology and Immunology at Stanford University School of Medicine. He is also a member of Bio-X and the Stanford Cancer Center, as well as the Director of the Stanford NHLBI Proteomics Center. He is on the editorial boards for the journals Gene Therapy & Molecular Biology, Cells to Genes, and Molecular Therapy.

In the interview below, Dr. Nolan talks with ScienceWatch.com correspondent Gary Taubes about his highly cited work.

One of your most-cited papers of the last decade is your 1999 Trends in Genetics paper on NF-kappa B (Foo SY, Nolan GP, "NF-kappa B to the rescue: RELs, apoptosis, and cellular transformation," 15[6]: 229-35, June 1999). What prompted you to write that paper and why is it so highly cited?

That paper was written at the tail end of my work in that area. It really harkens back to much earlier research that I did in David Baltimore’s lab at the beginning of my career. It’s just a review of the field, which is probably why it gets cited so frequently. It says, look, this is what NF-kappa B can do, and these are the important places where it plays a role. It was a review that for the first time summed up a more general role for NF-kappa B molecules in a manner that placed this signaling system in the center of a larger web of physiologic processes. But I'd say that early work led to our more recent research on cell profiling, which is gaining recognition in several fields, especially related to clinical signaling measurements, and is allowing us to look at cells in a broader context.

Okay, let’s fill in the gaps, though. What did you do after you worked on NF-kappa B in the Baltimore lab?

I developed an approach for delivering genes to cells that used what are called retroviral transfer vectors. That was the beginning of my gene transfer career and it led to many of these papers that are now highly cited, and the essential elements of that work are embodied in a lot of retroviral and lentiviral work being done in many labs and clinical settings. My first company, started in the mid-1990s, was built around the notion that the only purpose of viruses is to make more copies of themselves. This is the selfish gene hypothesis of Richard Dawkins. That is, viruses don’t care about the cell or the state of the cell they’re invading. They only care about proliferating and whether the target cell can support such replication.

My idea was to say, "Let’s reverse this process and make it to the benefit of the virus to make the cell healthier." We could engineer an evolutionary bottle for the virus, and make the virus understand that there will be more copies of it, but only if it takes a sick cell and makes it healthy. That would force the virus to basically search through evolutionary space for incremental advances to itself to correct the biology of the cell. And that approach has worked. The company we started, Rigel, has been very successful and that early work has led the company to develop a number of drugs in clinical trials for cancer and immunotherapy, with a very successful phase II trial of an orally available drug for rheumatoid arthritis.

And that work led to the idea of cell profiling?

We had the notion that an evolutionary bottle required a readout at the single-cell level. If the virus is doing something to the cell to make it healthier, we want to know what that is. So the point is that unhealthy cells have unhealthy signaling networks. Think of it like a normal individual versus a psychotic individual—a normal mental state versus an abnormal state.

"If the virus is doing something to the cell to make it healthier, we want to know what that is."

We use kinases and their actions as a measure of the state of the cell's information processing system. The action of a kinase is to phosphorylate a protein, and this process basically leaves a breadcrumb trail of cellular information telling you what these kinases are doing. Those phosphorylation sites on proteins are indicative of the relative functionality of individual proteins and their capabilities. So if you measure the status of many of these phosphorylated proteins within the cell, what you’re really measuring is the history of these kinases and the signaling action in the cell. But, as it turns out, you’re also measuring the future potential of that cell.

Basically you can think of these kinases as switches; when you flow information through the cell, the settings of these switches change, and that tells you what environment that information can encounter and how the cell might respond to such environments. The information will go down and through certain conduits based on these phosphorylation settings. So you get both the history of the cell and a prediction of the future at the same time by measuring these phosphorylation events if you know how to interpret them.
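To make the switch metaphor concrete, here is a minimal toy sketch in Python. The kinase names, the single conduit, and the pass/fail logic are invented for illustration; they are not the actual models used in the lab.

```python
# Toy model of the "kinases as switches" idea described above.
# The kinase names, the single conduit, and the logic are invented
# for illustration; they are not Dr. Nolan's actual signaling models.

# A cell's measured phospho-state: True = site phosphorylated ("switch on").
cell_state = {
    "p-IKK": True,    # upstream kinase in the NF-kappa B cascade
    "p-IkB": True,    # phosphorylated IkB frees NF-kappa B to act
    "p-STAT3": False,
}

def signal_reaches_target(state, route):
    """A signal flows down a conduit only if every switch along it is set."""
    return all(state.get(site, False) for site in route)

# Reading the switches gives both history (these kinases have acted)
# and a prediction (an inflammatory signal will get through this conduit).
print(signal_reaches_target(cell_state, ["p-IKK", "p-IkB"]))  # True
```

The point of the toy is only that reading the switch settings tells you both what has already happened in the cell and which routes a future signal can take.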

When you think of it that way, you realize, "Gee, if I want to effect a correct outcome in this cell, I better measure key indicators in the cell before I do it." For instance, for inflammation—NF-kappa B being a key indicator of inflammation—I should be measuring the phosphorylation of certain proteins in the NF-kappa B signaling cascade. These tell us, say, whether a cell is being continuously and inappropriately activated, and which phosphorylation site is a measure of that inappropriate activity. Now I want to find a drug or, in my case, a virus, that turns that off. If one of my viruses (or drugs) makes a cell healthier, we want to know, what was the biochemical event that changed for the better? That led us to the idea that since signaling is very important for cellular biology, how can we measure signaling at a single-cell level?

You can see the confluence of ideas here. If I can measure one of these effects, maybe two will be even more informative? What about three, five, six, 11, 15, or just as many as we can measure? Maybe we can get an angle on what this cell is doing. You’ve seen these signaling diagrams just filled with phosphoepitope tags. People might want to measure an estimated 40,000 of them in a cell. You can imagine them being like the blinking lights on Sulu’s console in Star Trek. If a bunch are flashing red, you know something is wrong. What we have shown is that you don't need to measure them all—that's information overload—but that a subset of those proteins, when linked together in groups of networks and subnetworks revealed through simulation, are incredibly informative as to the posture of a cell's current thinking and future capabilities.

So we wanted to measure these events in such groups. We went back to the flow cytometer, developed dye and laser systems that enable us to interrogate the algorithmic processes of cells at a single-cell level, to ask them questions that reveal how information is passed through these signaling conduits. Then we built up a profile of that information, and it turns out that certain profiles say, "I will respond to some drug"—chemotherapy, for instance—or, "I won’t respond." And if I know that information from interrogating cells in a patient’s blood, if I know that ahead of time, I know whether it's worth giving that drug or not.
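As a rough illustration of how such a profile could be turned into a respond/won't-respond call, here is a hedged sketch using a nearest-centroid comparison. The markers, the numbers, and the classifier itself are hypothetical, not the actual diagnostic.

```python
import numpy as np

# Hypothetical sketch of "certain profiles say I will respond": compare a
# patient's phospho-profile against reference profiles by nearest centroid.
# The marker names, numbers, and choice of classifier are all invented.

markers = ["p-STAT5", "p-ERK", "p-p38", "p-S6"]

# Reference profiles (median signal per marker), e.g. built from past
# patients whose outcomes are known.
responder     = np.array([0.9, 0.2, 0.1, 0.8])
non_responder = np.array([0.2, 0.7, 0.6, 0.1])

def predict_response(profile):
    """Assign the patient to whichever reference profile is closer."""
    if np.linalg.norm(profile - responder) < np.linalg.norm(profile - non_responder):
        return "likely responder"
    return "likely non-responder"

new_patient = np.array([0.8, 0.3, 0.2, 0.7])  # profile from a blood sample
print(predict_response(new_patient))           # -> likely responder
```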

We’ve been developing that system into clinical diagnostics. We’ve been proving to people that what we’re measuring is actually right; proving that if you give me 50 patients and I’m blind to their outcome, I can take their blood and basically say who will or will not respond to a particular drug. I started another company recently to really scale that up.

Can you describe in a little more detail how exactly you go about interrogating these cells? How does this system work?

Basically what we do is take cells and stimulate them with cytokines or antibodies or drugs. Then we wait a certain amount of time and essentially freeze the signaling network. We add a fixative—paraformaldehyde. Very simple. When you’re dead and they want to fix your body—embalm it—they pump you full of formaldehyde and that basically cross-links all the proteins and prevents decomposition. So what we’re doing here is similar. We’re basically freeze-framing the cell, which then allows us to process it further. We can then poke holes in the cell, allowing us to get antibodies into it, and these are specific for particular phosphoepitopes on proteins. Each of those antibodies is tagged with a little fluorescent agent. We have between 11 and 15 of them. Each is colored differently.

Then we go to this machine called a fluorescence-activated cell sorter, which was invented by Len and Lee Herzenberg. I studied with them when I was a graduate student at Stanford. That machine essentially takes those cells—say, a million of them—lines them all up in a row, and spits them through a nozzle at a rate of about 50,000 cells per second. As the cells come out in that stream, they pass a series of lasers, and each laser excites these fluorophores; they emit different colors, with intensities that depend upon the amount of antibody bound in each cell. In other words, each cell will light up like a Christmas tree in a different way. A photodetector measures the number of photons coming off at each wavelength, and that information is digitized. That tells us how much of the target protein is in each cell.
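To give a sense of what that digitized output looks like downstream, here is a small simulated sketch, under the simplifying assumption that each channel reduces to one intensity value per cell.

```python
import numpy as np

# Sketch of the digitized readout: one row per cell, one column per
# fluorophore channel. The simulated numbers and the arcsinh cofactor
# are illustrative; a real cytometer streams these values per event.

rng = np.random.default_rng(0)
n_cells, n_channels = 100_000, 11   # 11 phospho-specific antibody colors

# Digitized photon intensities for each cell in each channel.
intensities = rng.lognormal(mean=2.0, sigma=0.5, size=(n_cells, n_channels))

# One row is one cell's "Christmas tree" pattern: how brightly it lit
# up in each color, i.e. how much of each phospho-protein it carries.
first_cell = intensities[0]

# Population summary per channel (arcsinh is a common cytometry transform).
profile = np.arcsinh(intensities / 5).mean(axis=0)
print(first_cell.round(1), profile.round(2), sep="\n")
```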

Then we take that information and process it, learning to recognize what a healthy cell looks like, and what a cell looks like when it's, say, sitting in the middle of a tumor, or how a tumor cell differs from a normal cell. So not only can we measure what’s happening in the middle of a tumor, but we can measure how the tumor is instructing the immune system to do the wrong thing, and mechanistically how it's doing that. And we can determine whether a given tumor will react appropriately to a drug, or resist the drug's action.

And this method of interrogating cells was what you described in the 2005 Science paper (Sachs K, et al., "Causal protein-signaling networks derived from multiparameter single-cell data," 308[5721]: 523-9, 22 April 2005) and the 2004 Cell paper (Irish JM, et al., "Single cell profiling of potentiated phospho-protein networks in cancer cells," 118[2]: 217-28, 23 July 2004)?

Right. There were important methodological and immunologic papers prior to those, but the proof of the clinical and signaling biology reconstruction concepts was really in the Science and Cell papers.

Was there an element of serendipity to this work?

Well, the mathematics we’re using is 200 or 300 years old, but it’s only recently that it’s been readily usable for this kind of problem, with the advent of certain kinds of computer systems. It just turned out that one of my graduate students, Omar Perez (who developed many of the initial concepts around phospho-flow in my lab), was standing at a conference next to Karen Sachs, who was also a graduate student, studying with Doug Lauffenburger, an MIT biostatistician. I think they each had a poster and they got to talking, and that’s how we all ended up collaborating and making this work. That was certainly serendipitous.

A similar thing happened with our Cell paper. Again one of my graduate students was standing next to a researcher from Norway who had once applied to my lab for a postdoc. Now he was a professor and they got to talking and they kind of cooked up a project to measure phosphorylation in cancer cells. At the time, we didn’t understand that it would lead to this idea of patient stratification or the interrogation of single cells.

More recently, another student in the group, Peter Krutzik, developed ways to use the approach for drug screening in an extremely high throughput manner. His work has been featured as cover articles in both Nature Methods and Nature Chemical Biology in the past year alone.

If you were to play devil’s advocate, what would you tell me about why this cell profiling idea might fail to live up to your expectations? In other words, what could go wrong?

"So not only can we measure what’s happening in the middle of a tumor, but we can measure how the tumor is instructing the immune system to do the wrong thing, and mechanistically how it's doing that."

Well, we could just be flat-out wrong and completely misinterpreting our results. But we’ve done all the statistical tests, so it’s hard to imagine that’s the case, and the technology has now been replicated in several other well-known labs. At one level—and this is a national problem in biomedicine—we might not be able to get access to the kinds of antibodies we need—that is, the specific reagents that allow one to visualize certain molecular events. But this is a technical limitation, not a conceptual barrier. There might be patents that we don’t even know about that could prevent us from having access to the key antibodies, key reagents, and so on, but there are workarounds for that. Those are problems that might delay a company or clinical goals, not the academic pursuit itself, and certainly not to the degree that the technology won’t move forward.

It’s hard to think that anything there could go wrong, because it’s all gone right so far—in the five or six different clinical disease conditions we've tested, and in numerous others I've seen in papers I have reviewed and know are in press from other groups. And we have raised enough money to pursue this industrially, provided by some premier venture capital groups who did considerable due diligence on the approach, to the point that they are believers as well. So if the project goes awry, it would have to be a failure to execute, not a failure of the clinical biology.

Where do you think your technology will be in five years?

Commercially, we’ll probably have three or four diagnostics on the market that will be able to detect rare cells in patients that are about to become aggressively cancerous—in leukemias and lymphomas first off. We may have one or two diagnostics to say which drugs should be used with such patients, and these will provide a way of funneling patients to the right drug. With a little bit of extra effort, this same technology can be used on solid tumors, so that may be happening as well. We’ve demonstrated that in my academic laboratory already.

In fact, people are now beginning to use these as pharmacodynamic monitoring assays in clinical trials, and once they look at it, everyone says to me, "Why didn’t we think of it, it’s so easy to use and seems so obvious." But of course at the time, all I heard was that it wouldn’t work.

Why were people so skeptical? Why did they think it wouldn’t work?

No one thought it would be sensitive enough to do what we were claiming. And they did not think it would be quantitative enough. To me, these were technical limitations that could be overcome. The key issue was getting people to believe the conceptual advances the techniques could prove or enable.

What was the most challenging part of this research?

Getting it commercialized; that was the hardest part. In this case, not just finding the right people to work on it, but getting financial support. I spent probably two to three years tromping around, being laughed out of venture capital offices, being told nobody would ever make a penny on this, that it was useless.

To me at the time it was mind-boggling that people did not see the advantages: if you could tell who gets a benefit from a drug ahead of time and who doesn’t, if you have a way to get value from drugs that might otherwise be thrown away, why not do it? I was ready to throw in the towel when a family connection got me to Kleiner Perkins and Texas Pacific Group. They had started a couple of companies in the diagnostic area and they looked at me and said, "This should work." They believed what I believed.

Luckily, now, many groups are moving in this area, and it's become more than just a cottage industry driven by my lab alone, which is gratifying.

What do you mean by getting value out of drugs that would otherwise be thrown away?

The notion is that there are tons of drugs out there that work at a 10% rate: 10% show complete remission from cancer, for instance, and that’s great, but another 10% have a really hard time on the drug and for 80% it does nothing. Since we already have a drug out there—say, standard chemotherapy—which works on 40% of the patients, there’s an ethics of standard care that says that’s what you have to use. Even if 10% of your patients will get much better on this other drug, you don’t know who they are, so what benefit is it to these patients to put them through your drug first? You have to use the 40% drug first, even if that drug is practically nuking the patient, before you have a reason to allow the second drug. By the time the patient has failed the first drug regimen, the cancer might have advanced to a stage that is detrimental to the patient, or the first drug regimen (by nature of its inherent toxicity) might have compromised the health of the patient to the point that the new drug cannot work.

So there are a lot of drugs out there, even some that have virtually no side effects, that don’t gain approval because their success rate isn’t better than what’s already available—even if the already available drug is highly toxic. Ironically, the standard of care ethics doesn't always balance the equations correctly. And the large pharmaceutical companies don’t consider a 10% drug worth the effort even if they were able to overcome the ethical limitations. But what if you know in advance who the drug will work on and who it won’t? If you could pre-select the population of patients that would respond, now you have a market for the drug. If you can find those individuals ahead of time, the pharmaceutical companies, the insurers, and, most importantly, the patients can do very well.
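As a back-of-the-envelope check on that argument, the gain from pre-selection can be worked out directly. The sketch below uses the interview's illustrative 10% and 40% figures and assumes, for simplicity, that the two responder groups do not overlap.

```python
# Back-of-the-envelope arithmetic for the argument above. The 10% and 40%
# rates are the interview's illustrative figures; the assumption that the
# two responder groups don't overlap is mine, for simplicity.

patients      = 1000
rate_standard = 0.40   # standard chemotherapy: 40% of patients respond
rate_new      = 0.10   # shelved drug: 10% respond, but we don't know who

# Without a diagnostic: ethics of standard care, everyone gets the
# standard drug.
respond_without = patients * rate_standard                       # 400

# With a predictive diagnostic: route the identifiable 10% to the
# shelved drug; everyone else still gets the standard drug.
new_drug_responders = patients * rate_new                        # 100
respond_with = new_drug_responders + (patients - new_drug_responders) * rate_standard

print(int(respond_without), int(respond_with))   # 400 vs. 460 responders
```

Under those assumptions, stratification turns a 10% drug from unusable into a net gain of 60 responders per thousand patients.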

Garry P. Nolan, Ph.D.
Stanford University School of Medicine
Stanford, CA, USA
