In 2004 an international collaboration known as ENCODE (Encyclopedia of DNA
Elements) set out to address these issues by defining all functional
elements in a representative 1% of the human genome—whether in the
protein-coding regions or in the erstwhile junk. The results were published
in June 2007 in a summary paper in Nature and in 23 more
detailed articles in the journal Genome Research. The Nature
article, "Identification and analysis of functional
elements in 1% of the human genome" (447: 799-816, 2007), quickly
became a Hot Paper, racking up nearly 300 citations in roughly a year and a
half and duly taking up residence in this publication's Top Ten in Biology,
where it currently ranks at #3. In the process, the paper helped catapult
Ewan Birney, a scientist at the European Bioinformatics Institute in
Hinxton, U.K., and first author on the ENCODE paper, into the #9 position
in the current Thomson ranking of the hottest researchers
in molecular biology & genetics. Birney’s elite selection of 42
Thomson-covered papers in the field since 1998, representing just a
portion of his career output of more than 100 reports, has collectively
tallied over 13,000 citations—averaging an amazing 307 citations per paper.
Birney, at 36, is already a veteran of a host of genome projects, from
the human, mouse, and rat genomes to the platypus and Anopheles
gambiae, the mosquito that carries malaria. He received his
bachelor’s degree in biochemistry from Balliol College Oxford in
1996. He then spent the next four years working with Richard Durbin at
the Sanger Centre, where he received his Ph.D. in 2000. Since then,
Birney has been a senior scientist at the EBI, where he heads the
Nucleotide Data division.
He spoke to Science Watch from his office at the
EBI, just outside Cambridge.
You started young as a PI at the European
Bioinformatics Institute. What was your first major project?
I co-founded, with Tim Hubbard and Michelle Clamp, a project called
Ensembl, which was and still is one of the major resources for using
genomic information on the web. It presents the human genome to
researchers—it's a major access portal. I’m still one of
the principal investigators for that project and it still dominates my
life quite a lot.
How did ENCODE get started, and what was the motivation?
"'Dark matter of the genome'
is a better term than 'junk DNA,'"
says Ewan Birney of the European
Bioinformatics Institute. "It implies
that we don’t know what this stuff
ENCODE was one of a series of projects that followed on the human
genome project. At the end of 2003, people knew the human genome was
going to be finished; they knew the path forward for the mouse and rat
genomes. There was a kind of logical follow-up for other important
organisms. The question was, what else do we need to do to really
enable genomics? One project, for instance, which is kind of orthogonal
to ENCODE, is the HapMap project to discover how variations in the
genome occur in different humans. That’s one thing we need to
know. There are lots and lots of subtle differences. What are they?
Let’s just build a catalog.
The other side was understanding the genome better. Although we have
some reasonable appreciation of protein-coding genes, everything else
was a here-be-dragons kind of thing.
Here be dragons?
You know those old maps, where the known world ends and they just write
"here be dragons." We have these huge expanses of genome and we just
don’t know what’s going on. This is the noncoding DNA, in
the parlance. And we want to know what this stuff is doing.
Is this "junk DNA" we’re talking
Well, the phrase "junk DNA" has morphed over time. Back in the 1970s,
with the discovery of introns, junk DNA implied these large chunks of
DNA that got transcribed into RNA, cut out, and then seemingly thrown
away. As people in the 1980s began to rather painstakingly put together
big chunks of genomic DNA, they started to see the layout of genes in
the genome, and one of the first things they realized was that the
protein-coding stuff doesn’t make up much of it. Even in the
densest part of the genome, it makes up maybe 10%. On average it makes up
2%. Then you also find these dispersed repeats, these parasitic elements
that are found in every large genome, whether plants, fish, or humans.
They have their own set of specific genome parasites, copying
themselves happily across genomes. About half of our genome comes from
these repeats. The phrase "junk DNA" started to be used interchangeably
to mean these dispersed repeats selfishly copying themselves, as well as
all these other parts of the genome that we didn’t understand.
The phrase started to have a life of its own. It’s not a very
scientific term, although many scientists use it.
What phrase do you prefer?
"Dark matter of the genome" is a better term. It implies that we
don’t know what this stuff does.
So back to ENCODE. How was the collaboration
put together, and who decided what techniques to use to analyze the DNA?
The project was saying, in effect, let’s just throw the kitchen
sink of experimental techniques at this problem of understanding what
the non-coding DNA does. Dream up any kind of useful experiment,
propose it to NHGRI; peer review then says yes or no, and off we go to
basically chart the here-be-dragons part of the genome, to discover
what’s going on in the dark matter. When the project started in
2004, we really didn’t know which experiments were going to work
and which would be too expensive to do in a whole genome, so the pilot
project focused on 1% of the genome, divided up into 44 distinct
regions. That sounds like a small percentage, but it’s a lot of
DNA. It’s like if you want to study the Atlantic Ocean and you
study just 1%, you’ll probably learn a lot about it. Beginning in
2005, a dozen experimental groups were funded to look at this 1%. And
one of the key rules was that everybody had to use the same 1%. That
was critical. If we didn’t do that—if everybody had chosen
their own bit—it would have been a disaster.
How did you end up first author?
Most-Cited Papers by Ewan Birney and Colleagues, Published Since 1998
(Ranked by total citations)

E.S. Lander, et al., "Initial sequencing and analysis of the human genome," Nature, 409(6822): 860-921, 2001.

R.H. Waterston, et al., "Initial sequencing and comparative analysis of the mouse genome," Nature, 420(6915): 520-62, 2002.

A. Bateman, et al., "The Pfam protein families database," Nucl. Acids Res., 30(1): 276-80, 2002.

A. Bateman, et al., "The Pfam protein families database," Nucl. Acids Res., 28(1): 263-6, 2000.

G.M. Rubin, et al., "Comparative genomics of the eukaryotes," Science, 287(5461): 2204-15, 2000.
I am not first author—the first author is the "ENCODE Project
Consortium." I am first in the list of equals after that. About halfway
through the project—and this is typical in biology these
days—the real headaches shift from the experiments to the
bioinformatics. Once you collect the data, you have to aggregate it and
store it sensibly. That’s mostly boring plumbing and engineering;
it’s very tedious, but you have to get it right. Then the far
more interesting aspect of understanding the data—which again is
mainly bioinformatics—starts. I was originally funded to do a
very small bit in ENCODE, but when everybody else assumed that the
integration of the data would just magically happen, some of us from
different laboratories essentially put our hands up and said, yes, we
will dedicate our own effort to making this work. I was one of them. It
turned into a pretty painful two years of work, and by the end, I was
the first amongst equals in the author list, though many, many people
contributed to the paper. The thing to stress is that the paper did
have 308 authors.
So what did ENCODE tell us about the nature of the genome?
ENCODE didn’t have the neatest take-home message, although we did
have bullet points in the Nature paper explaining what we had
learned. When I discussed all this with a U.K. science journalist, he
ended up using the phrase "boffins are baffled," which about captures it.
The major thing we learned is that the genome is a very complex place,
including all these here-be-dragons areas, this dark matter. It’s
complex and multilayered. We certainly removed the idea that the genome
is a simple thing, with very discrete units of genes and discrete
regulatory information not in the genes, all very neatly packaged. When
you look at the ENCODE data, it doesn’t hold up. Introns of many
genes, for instance, are just alive with regulatory information. One of
the more challenging things we discovered is that we see lots more RNA
being transcribed than we ever expected, and we really don’t have
a proper way of understanding and classifying this RNA. So RNA is being
transcribed not just in the traditional way, into messenger RNA that
makes proteins, but it's also happening in bits between genes and,
weirder still, it’s crossing between gene boundaries, going from
one region that we’re pretty sure is not involved with protein
coding, and then going into a gene or out of a gene.
Can you describe that in a little more detail?
Some RNA transcripts start outside a gene and then go into the intron
of another gene, but they don’t actually seem to make an exon.
Rather they overlap with the exon of that gene. Some transcripts go the
other way: they start inside the intron of one gene, run through a
couple of that gene's introns, and then stop. Even more amazing, some
transcripts start in the intron of one gene and go for one megabase,
crossing four or five other genes, and then somehow, in a way we
don’t understand, start making exons in another gene. They join
two genes which have four genes in between them, at a very long genomic distance.
Any chance this is being interpreted incorrectly?
When I started, I was one of the real arch skeptics of this RNA data. I
now firmly believe that these RNAs exist. But what they do, if
anything, is something I can get into very long debates about. So this
will lead into what I think is the most interesting discovery from ENCODE.
Well, the genome is alive with what are called DNaseI hypersensitive
sites, which are sites on the genome that bind transcription factors.
This was not a surprise. It was completely expected that these sites
are all over the genome, even in lots of places that didn’t have
a gene. The surprise is that only about half the sequences under these
DNaseI hypersensitive regions are conserved across mammals. The other
half are not. The weird RNA described above is even less conserved than
this. That’s pretty odd, because, for protein-coding genes, maybe
95% are conserved. So this is really weird. We just didn’t expect
this. What on earth is going on?
So what on earth is going on?
What I believe, and what we put in the paper as the last of the
bullet points, is that, in fact, many of these elements arise by
chance, and once having done so, they’re neither good nor bad for
the organism. Then they just hang around until, by chance again, they
disappear. So this is a very interesting idea, that there is this big
pool of neutral stuff that’s biochemically active but not
actually important now for the organism.
If you looked again in 10 million years, the
50% that’s not conserved would look entirely different?
That would be the logic. Interestingly enough, very similar results
have been seen in Drosophila. One interpretation of this is
that maybe biology has tuned itself to create a pool of elements that
could be used by evolution in the future. The trouble with that
hypothesis, and the reason why theoreticians of evolutionary biology
will pooh-pooh it, is that evolution isn’t supposed to have
foresight. It can’t say, for example, that we'd better start
evolving these opposable thumbs because we might need them in the
future to hold a telephone. Evolution works on the instantaneous here
and now. On the other hand, the counterargument to this is that if you
take the hard line, you can’t justify sex. You can’t
justify why an individual would throw away half its genes to combine
with some other individual’s half. Surely that’s a bad idea
from the gene's perspective, to roll the dice every time on whether it
will make it to the next generation. Surely it’s not a good thing
for the gene. We hint at this idea in the paper, again without getting
stuck in it. But for me, this is the most interesting thing that has
come out of ENCODE.
When people wrote about ENCODE, most of them picked up on the idea that
we are redefining the word "gene," what with the RNA running in and out
of genes. That was the easiest thing to understand and put across. This
evolutionary stuff is genuinely harder to get your head around, but,
from my point of view, it’s the most important thing.
What can we expect from ENCODE in the future?
It’s being scaled up to cover 100% of the genome. The grants were
awarded last year. One of the great things about genomics is that it
will cost about the same, perhaps even a little less, to do this as it
did to do 1% a few years ago. And we’ll do it twice as fast.
The reason is we now have all these ultra-high-throughput sequencing
machines. That’s just completely changed the game and made these
experiments really cost effective.
Birney also talked with ScienceWatch.com about his Fast Moving Front
in the field of Computer Science in July 2006.
Keywords: Ewan Birney, EBI, European Bioinformatics Institute, ENCODE,
junk DNA, introns, RNA transcription, DNaseI, Pfam, Genomics.