Chado Tutorial 2011
Chado is the database schema of the GMOD project. This session introduces database concepts, provides an overview of Chado's design and architecture, and then goes into detail about how to use a Chado database.
- 1 Theory
- 1.1 Introduction
- 1.2 Why Chado?
- 1.3 Chado Architecture: Modules
- 1.4 Exploring the schema
- 1.4.1 Sequence Module
- 1.4.2 CV (Controlled Vocabularies) Module
- 1.4.3 Opening our sample database
- 1.4.4 Our first example query
- 1.4.5 General Module
- 1.4.6 Properties
- 2 Practice
- 2.1 Prerequisites
- 2.2 Installing GMOD
- 2.3 Preparing GFF data for loading
- 2.4 Loading other data
- 3 Chado for Expression, Genotype, Phenotype, and Natural Diversity
- 3.1 Expression
- 3.2 Genotype
- 3.3 Environment
- 3.4 Phenotype, Natural Diversity and Atlas Support
- 4 Resources
Or, Six years of school in 15 minutes or less.
What's a database?
- Chado is a schema, a database design - a blueprint for a database containing genomic data
- Distinct from a database instance: the actual running database, built from that blueprint and populated with data
SQL is a standardized query language for defining and manipulating databases. Chado uses it. SQL is supported by all major DBMSs.
FlyBase Field Mapping Tables shows some example SQL that queries the FlyBase Chado database. (Caveat: FlyBase sometimes uses Chado in ways that no other organizations do.)
Will SQL be on the test?
No, we aren't going to teach in-depth SQL in this course but we will use it in examples and show how to write queries in Chado.
You can do basics with Chado without knowing SQL. Many common tasks already have scripts written for them. However, as you get more into using Chado, you will find that a working knowledge of SQL is necessary.
- Supports many types of data, integrates with many tools
- Use only what you need, ignore the rest
- Write your own modules and properties
- Widely used
- FlyBase - Chado started here, large diverse dataset and organization
- Xenbase - Smaller, but with several IT staff
- ParameciumDB - Smaller still, complete GMOD shop, including Chado
- IGS - Large-scale annotation/comparative data in Chado, more than a dozen active developers
- Plus AphidBase, BeeBase, BeetleBase, BovineBase, ...
- Great Community of Support
Chado Architecture: Modules
The Chado schema is built with a set of modules. A Chado module is a set of database tables and relationships that stores information about a well-defined area of biology, such as sequence or attribution.
(Also available as a PowerPoint animation)
Arrows are dependencies between modules. Dependencies indicate one or more foreign keys linking modules.
- General - Identifying things within the DB to the outside world, and identifying things from other databases.
- Controlled Vocabulary (cv) - Controlled vocabularies and ontologies
- Publication (pub) - Publications and attribution
- Organism - Describes species; pretty simple. Phylogeny module stores relationships.
- Sequence - Genomic features and things that can be tied to or descend from genomic features.
- Map - Maps without sequence
- Genetic - Genetic data and genotypes
- Companalysis - Storage of computational sequence analyses. The key concept is that the results of a computational analysis can be interpreted or described as a sequence feature.
These modules have been contributed to Chado by users who developed them.
- Mage - Microarray data
- Stock - Specimens and biological collections
- Natural Diversity - geolocation, phenotype, genotype (in development)
- Plus property tables in many modules.
- Audit - Database audit trail
- Expression - Summaries of RNA and protein expression
- Library - Descriptions of molecular libraries
- Phenotype - Phenotypic data
- Phylogeny - Organisms and phylogenetic trees
All modules are blessed, but some modules are more blessed than others.
The General, CV, Publication, Organism, Sequence and Companalysis modules are all widely used and cleanly designed. After that, modules become less frequently used (Stock, Expression, Phenotype, Mage). Also, several modules are not as cleanly separated as we would like them to be: phenotypic data is spread over several modules, Organism and Phylogeny overlap, and CMap is all about maps but does not use the Map module.
From Jeff Bowes, at XenBase:
As for Chado, we are more Chadoish than exactly Chado. We use the core modules with few changes - feature, cv, general, analysis. Although I prefer to add columns to tables when it is reasonable and limit the use of property tables (too many left outer joins). We use a slightly modified version of the phylogeny module. We have developed completely different modules for community, literature, anatomy and gene expression. If there is a PATO compatible Chado Phenotype solution we'd prefer to go with that. Although, it might cause problems that we have a separate anatomy module as opposed to using cvterm to store anatomy.
In other words the ideal is good, but implementation and usage is uneven. See the 2008 GMOD Community survey for what gets used.
Exploring the schema
Rather than simply listing out the modules and what is stored in them, we'll take a data-centric view and imagine what we want to store in our database, then learn the Chado way of storing it.
During this course you'll be working with genome annotation data from MAKER. We'll simplify this and start by considering that we have annotation on chromosomes that we want to store in our database. These are the sort of things we want to store:
- gene predictions
- BLAST matches
You may have worked with databases in the past where each type of thing you want to store is given its own table. That is, we'd have a table for genes, one for chromosomes, tRNAs, etc. The problem with this sort of design is that, as you encounter new types of things, you have to create new tables to store them. Also, many of these 'thing' tables are going to look very much alike.
Chado is what is known as a generic schema, which, in effect, means that data are abstracted wherever possible to prevent duplication in both the design and the data itself. So, instead of one table for each type of 'thing', we have just one table to hold 'things', regardless of their types. In the Chado world these are known as 'features'.
This brings us to the Sequence Module, which contains the central feature table.
The sequence module is used to manage genomic features.
Chado defines a feature to be a region of a biological polymer (typically a DNA, RNA, or a polypeptide molecule) or an aggregate of regions on this polymer. A region can be an entire chromosome, or a junction between two bases. Features are typed according to the Sequence Ontology (SO), they can be localized relative to other features, and they can form part-whole and other relationships with other features.
Features are stored in the feature table.
Within this feature table we can store all types of features and keep track of their type with the type_id field. It's conceivable to store the name of each type directly in this field, like 'gene', 'tRNA', etc., but this would be prone to things like spelling errors, not to mention disagreement about the definition of some of these terms.
To solve this, all features are linked to a specific type in a controlled vocabulary or ontology. These are stored in the cv module.
CV (Controlled Vocabularies) Module
The CV module implements controlled vocabularies and their more complex cousins, ontologies.
A controlled vocabulary (CV) is a list of terms from which a value must come. CVs are widely used in all databases, not just biological ones. Pull down menus are often used to present CVs to users in query or annotation interfaces.
|ZFIN's Assay Type CV|
Controlled vocabularies are simple lists of terms. Ontologies are terms plus rules and relationships between the terms. The Gene Ontology (GO) and Sequence Ontology (SO) are the two best known ontologies, but there are many more available from OBO.
Ontologies can be incredibly complex with many relationships between terms. Representing them and reasoning with them is non-trivial, but the CV module helps with both.
|FlyBase CV Term Viewer showing GO term "tissue regeneration"|
CVs and Ontologies in Chado
(See the CVTerm table referencing table list.)
Every other module depends on the CV module. CVs and ontologies are central to Chado's design philosophy. Why?
Using CVs (and enforcing their use as Chado does) ensures that your data stays consistent. For example, in the most simple case it prevents your database from using several different values all to mean the same thing (e.g., "unknown", "unspecified", "missing", "other", " ",...), and it prevents misspellings ("sagital" instead of "sagittal") and typos.
Data Portability and Standardization
If you are studying developmental processes and you use the Gene Ontology's biological process terms, then your data can be easily shared and integrated with data from other researchers. If you create your own set of terms or just enter free text (egads!), then it will require a lot of human intervention to convert your data to a standard nomenclature so it can be integrated with others.
Using an established ontology when one exists usually involves some compromises, but it greatly increases the usability of your data to others (and to yourself).
Controlled vocabularies are not particularly complex - they are just lists of terms. Ontologies, however, can be very complex, as shown by the GO example above. This complexity could be ignored. You could, for example, convert GO to a controlled vocabulary - a very long list of terms. You would still have data integrity and portability, and it wouldn't be as complex.
It also would not be as powerful. Ontologies support reasoning about the terms in them and this can be very useful. With GO, for example, you can ask
Show me all genes involved in anatomical structure development
and get back genes directly tagged with anatomical structure development, plus any genes tagged with any of that term's sub-terms, from organ development to regulation of skeletal muscle regeneration. If you convert GO to just a list of terms, you can no longer answer that question.
The Chado CV Module supports such complex queries with ontologies by pre-calculating the transitive closure of all terms in an ontology. There is a great explanation of transitive closure on the Chado CV Module page. Also see the description of these 3 tables:
We won't go into any more detail on it here.
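As an illustration (a sketch only, assuming the standard cvterm, cvtermpath, feature, and feature_cvterm tables and that GO has been loaded), the "anatomical structure development" question above might be answered like this:

<sql>
-- Find features annotated with a GO term OR any of its descendants,
-- using the pre-computed transitive closure in cvtermpath.
SELECT DISTINCT f.uniquename
FROM feature f
JOIN feature_cvterm fc ON f.feature_id = fc.feature_id
WHERE fc.cvterm_id IN (
    -- the term itself
    SELECT cvterm_id FROM cvterm WHERE name = 'anatomical structure development'
    UNION
    -- every term beneath it in the ontology
    SELECT p.subject_id
    FROM cvtermpath p
    JOIN cvterm parent ON p.object_id = parent.cvterm_id
    WHERE parent.name = 'anatomical structure development'
);
</sql>

Without the pre-computed cvtermpath table, this query would require recursive traversal of cvterm_relationship, which is far more expensive.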
Opening our sample database
$ psql chado
Our first example query
Up to this point we've seen how to store features using the feature table as well as rigidly define what types of things they are using the cv module tables. Here is an SQL example of how to query some very basic information about all gene features in our database:

<sql>
SELECT gene.feature_id, gene.uniquename, gene.name
FROM feature gene
JOIN cvterm c ON gene.type_id = c.cvterm_id
WHERE c.name = 'gene';
</sql>
This should return something like:
 feature_id |                  uniquename                  |                     name
------------+----------------------------------------------+----------------------------------------------
        405 | maker-scf1117875582023-snap-gene-0.0         | maker-scf1117875582023-snap-gene-0.0
        409 | maker-scf1117875582023-snap-gene-0.3         | maker-scf1117875582023-snap-gene-0.3
        415 | genemark-scf1117875582023-abinit-gene-0.42   | genemark-scf1117875582023-abinit-gene-0.42
       1011 | maker-scf1117875582023-snap-gene-1.0         | maker-scf1117875582023-snap-gene-1.0
       1018 | maker-scf1117875582023-snap-gene-1.4         | maker-scf1117875582023-snap-gene-1.4
       1022 | maker-scf1117875582023-snap-gene-1.1         | maker-scf1117875582023-snap-gene-1.1
       1027 | maker-scf1117875582023-snap-gene-1.2         | maker-scf1117875582023-snap-gene-1.2
       1032 | maker-scf1117875582023-snap-gene-1.7         | maker-scf1117875582023-snap-gene-1.7
       1038 | maker-scf1117875582023-snap-gene-1.5         | maker-scf1117875582023-snap-gene-1.5
       1698 | snap_masked-scf1117875582023-abinit-gene-2.4 | snap_masked-scf1117875582023-abinit-gene-2.4
...
Type q to escape the listing.
Lists of things are great, but we're going to need to do a lot more with our genomic data than keep lists of features. First, we need to be able to identify them properly. This may seem straightforward, but creating one column per ID type in feature would be a bad idea, since any given feature could have dozens of different identifiers from different data sources. The General Module helps resolve this.
The General module is about identifying things within this DB to the outside world, and identifying things from the outside world (i.e., other databases) within this database.
Biological databases have public and private IDs and they are usually different things.
Public IDs: These are shown on web pages and in publications. They are also known as accession numbers.
|GO + 0043565 = GO:0043565|
|InterPro + IPR001356 = InterPro:IPR001356|
|YourDB + whatever = YourDB:whatever|
Public IDs tend to be alternate keys inside the database: they do uniquely identify objects in the database.
Private IDs: These are used inside the database and are not meant to be shown or published. They tend to be long integers. There are many more private IDs than public IDs.
Private IDs are used for primary keys and foreign keys.
Most DBMSs have built-in mechanisms for generating private IDs.
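In PostgreSQL, for example, this is typically done with the serial type. A minimal sketch (a hypothetical table, not actual Chado DDL):

<sql>
-- Hypothetical example: PostgreSQL auto-generates the private ID from a
-- sequence; the uniquename serves as a public-facing alternate key.
CREATE TABLE mytable (
    mytable_id serial PRIMARY KEY,           -- private ID
    uniquename varchar(255) NOT NULL UNIQUE  -- public-facing alternate key
);
</sql>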
IDs in Chado
The General module defines public IDs of
- items defined in this database, and
- items defined in other databases, that are used or referenced in this database.
In fact, those two classes of IDs are defined in exactly the same way, in the dbxref table.
In Chado every table (in every module) defines its own private IDs.
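For example, a public ID such as GO:0043565 can be reconstructed from the dbxref and db tables; a sketch (assuming the standard db_id and accession columns):

<sql>
-- Build the public "DB:accession" form for every cross-reference.
SELECT db.name || ':' || dbxref.accession AS public_id
FROM dbxref
JOIN db ON dbxref.db_id = db.db_id;
</sql>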
So far we've only seen very basic information for each feature stored. Because the feature table was designed to be very generic and store all feature types, attributes specific to only some types can't be stored there. It wouldn't make sense, for example, to have a column called 'gene_product_name', since that column would be empty for all the features, like chromosomes, that aren't gene products.
These feature-specific attributes are known as 'properties' of a feature in Chado. They are stored in a table called featureprop.
You may have noticed that the featureprop table shares a 'type_id' column with the feature table. Properties of features are typed according to a controlled vocabulary, just as the features themselves are. This helps to ensure that everyone using the same vocabularies is encoding their property assertions in the same way.
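As a sketch (using one of the sample gene names from the query output above), fetching a feature's properties with their types resolved through the CV module looks something like:

<sql>
-- List each property of one feature, with its type name from cvterm.
SELECT f.uniquename, t.name AS property_type, fp.value
FROM featureprop fp
JOIN feature f ON fp.feature_id = f.feature_id
JOIN cvterm t ON fp.type_id = t.cvterm_id
WHERE f.uniquename = 'maker-scf1117875582023-snap-gene-0.0';
</sql>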
Using feature properties we can now describe our features as richly as needed, but our features are still independent of one another. Because many of the features we're storing have some sort of relationship to one another (such as genes and their polypeptide end-products), our schema needs to accommodate this.
Relationships between features are stored in the feature_relationship table.
Features can be arranged in graphs, e.g. "exon part_of transcript part_of gene". If the type is thought of as a verb, then each arc or edge makes a statement:
- Subject verb Object, or
- Child verb Parent, or
- Contained verb Container, or
- Subfeature verb Feature
Again, notice the use of controlled vocabularies (type_id) to define the relationship between features.
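As a sketch, finding the subfeatures that are part_of one of our sample genes might look like:

<sql>
-- Subject <verb> Object: list the children of a gene
-- via typed feature_relationship edges.
SELECT child.uniquename AS subfeature, rtype.name AS relationship
FROM feature_relationship fr
JOIN feature child  ON fr.subject_id = child.feature_id
JOIN feature parent ON fr.object_id  = parent.feature_id
JOIN cvterm rtype   ON fr.type_id    = rtype.cvterm_id
WHERE parent.uniquename = 'maker-scf1117875582023-snap-gene-0.0'
  AND rtype.name = 'part_of';
</sql>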
One might think of a relationship between a gene and chromosome as 'located_on', and store that as a feature_relationship entry between the two. If you did, pat yourself on the back for Chado-ish thinking, but there's a better way to handle locatable features, since a relationship entry alone wouldn't actually say WHERE the feature was located, only that it was.
Location describes where a feature is/comes from relative to another feature. Some features such as chromosomes are not localized in sequence coordinates, though contigs/assemblies which make up the chromosomes could be.
Locations are stored in the featureloc table, and a feature can have zero or more featureloc records:
- one featureloc record, for localized features whose location is known;
- zero featureloc records, for unlocalized features such as chromosomes, or for features whose location is not yet known, such as a gene discovered using classical genetics techniques; or
- multiple featureloc records, which are explained below.
(For a good explanation of how features are located in Chado see Feature Locations. This explanation is excerpted from that.)
This is covered in more detail on the GMOD web site.
A featureloc record specifies an interval in interbase sequence coordinates, bounded by the fmin and fmax columns, each representing the lower and upper linear position of the boundary between bases or base pairs (with directionality indicated by the strand column).
Interbase coordinates were chosen over the base-oriented coordinate system because the math is easier, and it cleanly supports zero-length features such as splice sites and insertion points.
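The arithmetic is worth seeing once: in interbase coordinates a feature's length is simply fmax - fmin, and converting to the familiar base-oriented (GFF-style) coordinates is fmin + 1 through fmax. A sketch:

<sql>
-- Interbase math: length and base-oriented coordinates from featureloc.
SELECT f.uniquename,
       fl.fmin + 1       AS gff_start,  -- base-oriented start
       fl.fmax           AS gff_end,    -- base-oriented end
       fl.fmax - fl.fmin AS length,     -- zero for insertion points
       fl.strand
FROM feature f
JOIN featureloc fl ON f.feature_id = fl.feature_id;
</sql>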
Chado supports location chains: for example, locating an exon relative to a contig that is itself localized relative to a chromosome. The majority of Chado instances will not require this flexibility; features are typically located relative to chromosomes or chromosome arms.
The ability to store such localization networks or location graphs can be useful for unfinished genomes or parts of genomes such as heterochromatin, in which it is desirable to locate features relative to stable contigs or scaffolds, which are themselves localized in an unstable assembly to chromosomes or chromosome arms.
Localization chains do not necessarily only span assemblies - protein domains may be localized relative to polypeptide features, themselves localized to a transcript (or to the genome, as is more common). Chains may also span sequence alignments.
Feature location information is stored in the featureloc table.
Note: This example and some of the figures are extracted from A Chado case study: an ontology-based modular schema for representing genome-associated biological information, by Christopher J. Mungall, David B. Emmert, and the FlyBase Consortium (2007)
How is a "central dogma" gene represented in Chado?
How do we represent these exons, mRNAs, proteins, and the relationships between them?
Example: Computational Analysis
Note: This example is based on an example by Scott Cain from an earlier Chado workshop.
You can store the results of computational analyses such as BLAST or BLAT runs in the sequence module. Here's an example of how you could store a BLAST highest scoring pair (HSP) result.
With HSP results there are two reference sequences, which means two entries in the featureloc table. (The analysis and analysisfeature tables, in the Companalysis module, are used to store information about how the analysis was done and what scores resulted.)
Every horizontal line becomes a record in the feature table, and every vertical line becomes a record in the feature_relationship table.
Other Feature Annotations
Link to any feature via feature_id:
- GO terms in feature_cvterm
- DB links in feature_dbxref
- Miscellaneous features in featureprop
- Attribution in feature_pub
Extending Chado: Properties tables and new modules
Chado was built to be easily extensible. New functionality can be added by adding new modules. Both the Mage and the Stock modules were added after Chado had been out for a while. They both addressed needs that were either not addressed, or were inadequately addressed in the original release.
Chado can be tailored to an individual organization's needs by using property tables. Property tables are a means to virtually add new columns without having to modify the schema. Property tables are included in many modules.
Property tables are incredibly flexible, and they do make Chado extremely extensible, but they do so at a cost.
- The SQL to get attributes from property tables is much more complex, and
- It is more work to enforce constraints on data in property tables than in regular columns.
Using data from MAKER.
Already installed PostgreSQL 8.4 via apt-get.
Edit config files
sudo bash
cd /etc/postgresql/8.4/main/
less pg_hba.conf
At the bottom of the file, we've already changed ident sameuser to trust. This means that anyone who is on the local machine is allowed to connect to any database. If you want to allow people to connect from other machines, their IP address or a combo of IP address and netmasks can be used to allow remote access.
Now (re)start the database server:
Create a gmod user
- This has already been done on the image.
First, switch to the postgres user (it was created during the PostgreSQL package install):
su - postgres
createuser gmod
Shall the new role be a superuser? (y/n) y
exit  # to leave postgres user shell
exit  # to leave root shell
Installing DBIx::DBStag by hand
In the previous course this was installed by hand, but on this server it is already installed. It is moderately tricky because it is difficult to install via CPAN, but easy to install "by hand."
Version 1.6 already installed.
Was installed during the OS install.
in ~/.profile add:
GMOD_ROOT='/usr/local/gmod'
export GMOD_ROOT
and source the profile:

source ~/.profile

Then run Makefile.PL:

cd ~/Documents/Software/schema/chado/
perl Makefile.PL
Use values in '/home/gmod/Documents/Software/schema/chado/build.conf'? [Y] n
This allows you to specify Pythium when you are asked for a default organism.
> "Pythium" for the default organism
Just hit enter when it asks for a password for user gmod.
This Makefile.PL does quite a bit of stuff to get the installation ready. Among other things it:
- Sets up a variety of database user parameters
- Prepares the SQL files for installation
- Copies GMODTools programs from another part of the Subversion repository (this is done for you when you get a gmod release)
- It can rebuild the Class::DBI API for Chado, but doesn't here (the modules are prebuilt for the default schema)
- All the 'normal' stuff that would happen when running Makefile.PL, like copying lib and bin files to be ready for installation
- Warns you if GO_ROOT is not set (it does this, though the warning probably scrolled off the screen; it doesn't matter anyway, because GO_ROOT only needs to be set if go-perl is not installed in the normal Perl library path).
make
sudo make install
make load_schema
make prepdb
rm -rf ./tmp
make ontologies

Available ontologies:
 1 Relationship Ontology
 2 Sequence Ontology
 3 Gene Ontology
 4 Chado Feature Properties
 5 Cell Ontology
 6 Plant Ontology
Which ontologies would you like to load (Comma delimited)? 1,2,4
You can pick any ontologies you want, but GO will take over an hour to install, and it doesn't matter too much what you pick, because we are going to be blowing this database away soon anyway.
Saving your progress to this point
It is generally a good idea to save your progress when you are done loading ontologies before you've attempted to load any other data. That way, if something goes wrong, it is very easy to restore to this point. To make a db dump, do this:
pg_dump chado | bzip2 -c --best > db_w_ontologies.bz2
The -c tells bzip2 to write the compressed output to stdout (it reads its input from stdin when used in a pipe). To restore from a dump, drop and recreate the database and then uncompress the dump into it, like this:
dropdb chado
createdb chado
bzip2 -dc db_w_ontologies.bz2 | psql chado
But you don't need to do any of that, because I've provided you with a dump with only ontologies in it.
A Note about installing GO
There is a bug in either the go-perl parser or more likely in stag_storenode.pl that shows itself when installing GO. The problem is that it changes the ownership of the 'part_of' relationship term from the relationship ontology to GO. The result is that both Apollo (I think) and the GFF3 loader will fail when it can't find the part_of term. The easiest way to fix this is by issuing a SQL command in the psql shell:
UPDATE cvterm SET cv_id = (SELECT cv_id FROM cv WHERE name='relationship') WHERE name='part_of' and cv_id in (SELECT cv_id FROM cv WHERE name='gene_ontology');
But we don't have to do that now because we didn't load GO.
A Note about Redos
If at some point you feel like you want to rebuild your database from scratch, you need to get rid of the temporary directory where ontology files are stored. You can either do this with rm -rf ./tmp (be very careful about that ./ in front of tmp) or with a make target that was designed for this: make rm_locks, which only gets rid of the lock files but leaves the ontology files in place.
Reload a new copy of the DB with ontologies in it
dropdb chado
createdb chado
bzip2 -dc ontologies_only.bz2 | psql chado
Preparing GFF data for loading
- splitting 'annotations' from 'computational analysis'
- very large files
- fasta → gff
- utility scripts for various preparation activities
Working with Large GFF files
Large files (more than 300,000-500,000 rows) can cause headaches for the GFF3 bulk loader, but genome annotation GFF3 files can frequently be millions of lines. What to do? The script gmod_gff3_preprocessor.pl will help with this, both by splitting the files into reasonably sized chunks and by sorting the output so that it makes sense.
Note that if your files are already sorted (and in this case, that means that all parent features come before their child features and lines that share IDs (like CDSes sometimes do) are together), then all you need to do is split your files. Frequently files are already sorted and avoiding sorting is good for two reasons:
- It takes a long time to do the sort compared to the split (it parses every line, loads it into temporary tables and pulls out lines via query to rebuild the GFF file)
- The sorting process makes it more difficult to read the resulting GFF, since the parent feature will no longer be near the child features in the GFF file. Chado doesn't care about that but you might.
- This has already been done.
Lines like these:

##genome-build maker genome::1.00
##sequence-region scf1117875582023 1 719819

need to be deleted. I'll work on fixing BioPerl to deal with this more gracefully.
Working with the large number of files that come out of the preprocessor can be a bit of a headache, so I've 'developed' a few tricks. Basically, I make a bash script that will execute all of the loads at once. The easy way (for me) to do this is:
ls *.gff3 > load.sh
vi load.sh
and then use vim regex goodness to write the loader commands into every line of the file:
:% s/^/gmod_bulk_load_gff3.pl -a --skip_vacuum -g /
which puts the command at the beginning of every line. The --skip_vacuum option tells the loader not to do a VACUUM ANALYZE after it is done running. The -a tells the loader that it is working with analysis results, so the scores need to be stored in the companalysis module. While the genes file is not really analysis results, I faked it to look like FgenesH results so that Apollo would have something to work with. Then I manually edit the file to add and remove the few modifications I need:
- Remove the -a from the chromosome line (it isn't an analysis result)
- Add --noexon to the genes file (since it contains both CDS and exon features, I don't want the loader to create exon features from CDS features).
Then I add #!/bin/bash at the top of the file (which isn't actually needed) and run

bash load.sh
and it loads all of the files sequentially. I have done this with 60-70 files at a time, letting the loader run for a few days(!)
Capturing the output to check for problems
If you are going to let a load run a very long time, you probably should capture the output to check for problems. There are two ways to do this:
- run inside the screen command:
screen -S loader
which creates a new 'screen' separate from your login. Then execute the load command in the screen. To exit the screen but let your load command continue running in it, type ctrl-a followed by a d (to detach) and you get your original terminal back. To reconnect to the loader screen, type
screen -R loader
- capture stdout and stderr to a file
When you run the load command, you can use redirection to collect the stdout and stderr to a file:
bash load.sh >& load.output
Really loading data
OK, let's put the data into chado:
cd ~/Documents/Data/maker/example2_pyu/finished.maker.output/finished_datastore/scf1117875582023
cp scf1117875582023.gff ~
cd
gmod_bulk_load_gff3.pl -g scf1117875582023.gff
This should not work (yet!)
Adding our organism
We told the installer that we were going to use an organism named "Pythium", but Chado doesn't know about it. We need to add it to the database:
psql chado
chado> SELECT * FROM organism;
chado> INSERT INTO organism (abbreviation, genus, species, common_name)
       VALUES ('P.ultimum', 'Pythium', 'ultimum', 'Pythium');
chado> \q
gmod_bulk_load_gff3.pl -g scf1117875582023.gff
Oops. Forgot to edit the GFF file
Remove the ## directive line from MAKER that the loader can't cope with:
##sequence-region scf1117875582023 1 719819
gmod_bulk_load_gff3.pl -g scf1117875582023.gff
Doh! The loader is trying to tell us that this looks like analysis data (that is, data produced by computer rather than humans).
We need to tell the loader that it is in fact analysis results:
gmod_bulk_load_gff3.pl -a -g scf1117875582023.gff
The name of the data file is GMOD_sample_data.gff, as distributed in the zipped sample data at the beginning.
Kill, kill, kill! (ctrl-c) the load as soon as you see this message:
There are both CDS and exon features in this file, but you did not set the --noexon option, which you probably want. Please see perldoc gmod_bulk_load_gff3.pl for more information.
Argh! Now the loader is pointing out that this GFF file has both exons and CDS features and Chado prefers something a little different. While the loader will load this data as written, it won't be "standard." Instead, we'll add the --noexon option (which tells the loader not to create exon features from the CDS features, since we already have them). One (at least) more time:
gmod_bulk_load_gff3.pl -a -g scf1117875582023.gff --noexon
Success! (Probably.) If this failed, when rerun, we may need the --recreate_cache option, which recreates a "temporary" table that the loader uses to keep track of IDs.
Loading other data
Chado for Expression, Genotype, Phenotype, and Natural Diversity
This section is about some of the lesser used Chado Modules.
From an organism database point of view, expression is about turning this:
| Gene | Genotype / Conditions | Stage | Anatomy | Assay |
| gsc | wild type (unspecified), MO:diaph2,pfn1 | Bud | prechordal plate | ISH |
| gsc | wild type (unspecified), MO:diaph2 | Bud | prechordal plate | ISH |
| ntla | wild type (unspecified), MO:diaph2,pfn1 | Bud | notochord | ISH |
| ntla | wild type (unspecified), MO:diaph2,pfn1 | Bud | tail bud | ISH |
| ntla | wild type (unspecified), MO:diaph2 | Bud | notochord | ISH |
| ntla | wild type (unspecified), MO:diaph2 | Bud | tail bud | ISH |
|From ZFIN: Figure: Lai et al., 2008, Fig. S5|
What defines an expression pattern?
Expression could include a lot of different things, and what it includes depends on the community:
- Gene, or Transcript, or Protein, or ...
- What are we measuring?
- Store single stages, or stage windows?
- What do stage windows or adjacent stages mean: expressed throughout the window, or are we just unsure of the stage (common with high-throughput data)?
- Strain / Genotype
- Which genotype did we see this in?
- Does "expressed in brain" mean expressed in all, most, or somewhere in brain?
- ISH, antibody, probe, ...
- Publication, high-throughput screen, lab, project, ...
- Do you want to keep track of homogeneous, graded, or spotty expression?
- Not Expressed
- Do you keep track of absence? If so, what does it mean? (not detected)
- Is an image required or optional?
- What do strengths mean across different experiments?
How does Chado deal with this variety?
Post-composition, which is a very Chadoish way of doing things.
- Embrace a minimal definition of what an expression pattern is. In Chado, all that is required is a name, e.g., SLC21A in GMOD Course Participant kidney after 4 days at NESCent, or just "GMOD0002347".
- You can also provide a description. If your name is "GMOD0002347", this may be a good idea.
- Details are then hung off that.
A specific example from FlyBase:
Here is an example of a simple case of the sort of data that FlyBase curates.
- The dpp transcript is expressed in embryonic stage 13-15 in the cephalic segment as reported in a paper by Blackman et al. in 1991.
This would be implemented in the expression module by linking the dpp transcript feature to expression via feature_expression. We would then link the following cvterms to the expression using expression_cvterm:
- embryonic stage 13 where the cvterm_type would be stage and the rank=0
- embryonic stage 14 where the cvterm_type would be stage and the rank=1
- embryonic stage 15 where the cvterm_type would be stage and the rank=2
- cephalic segment where the cvterm_type would be anatomy and the rank=0
- in situ hybridization where the cvterm_type would be assay and the rank=0
In FlyBase, this would be a single expression record, with 5 Ontology/CV terms attached to it.
- 1 saying what anatomy the expression is for - cephalic segment
- 1 saying that the assay type was in situ hybridization
- 1 each for each of the 3 stages - embryonic stages 13, 14, 15
- 1 record saying this expression pattern is for dpp.
- 1 record saying this expression pattern is from Blackman et al. in 1991.
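The linking just described can be sketched with a toy in-memory database. This is a deliberately simplified mock: real Chado runs on PostgreSQL, and cvterm and cvterm_type are foreign keys into the CV module rather than the inline text strings used here; the feature uniquename "dpp-RA" is an illustrative stand-in for the actual dpp transcript feature.

```python
import sqlite3

# Simplified mock of the Chado expression module (SQLite, for illustration only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feature (feature_id INTEGER PRIMARY KEY, uniquename TEXT);
CREATE TABLE expression (expression_id INTEGER PRIMARY KEY,
                         uniquename TEXT UNIQUE NOT NULL);
CREATE TABLE feature_expression (
    feature_id INTEGER REFERENCES feature(feature_id),
    expression_id INTEGER REFERENCES expression(expression_id));
CREATE TABLE expression_cvterm (
    expression_id INTEGER NOT NULL,
    cvterm TEXT NOT NULL,        -- stands in for cvterm_id
    cvterm_type TEXT NOT NULL,   -- stands in for cvterm_type_id
    rank INTEGER NOT NULL DEFAULT 0);
""")

# One expression record for the dpp transcript, with 5 CV terms hung off it.
con.execute("INSERT INTO feature VALUES (1, 'dpp-RA')")
con.execute("INSERT INTO expression VALUES (1, 'dpp embryonic expression')")
con.execute("INSERT INTO feature_expression VALUES (1, 1)")
terms = [
    ("embryonic stage 13", "stage", 0),
    ("embryonic stage 14", "stage", 1),
    ("embryonic stage 15", "stage", 2),
    ("cephalic segment", "anatomy", 0),
    ("in situ hybridization", "assay", 0),
]
con.executemany(
    "INSERT INTO expression_cvterm (expression_id, cvterm, cvterm_type, rank) "
    "VALUES (1, ?, ?, ?)", terms)

# Reassemble the curated statement: which terms describe dpp's expression?
rows = con.execute("""
    SELECT ec.cvterm_type, ec.cvterm
    FROM feature f
    JOIN feature_expression fe ON fe.feature_id = f.feature_id
    JOIN expression_cvterm ec ON ec.expression_id = fe.expression_id
    WHERE f.uniquename = 'dpp-RA'
    ORDER BY ec.cvterm_type, ec.rank
""").fetchall()
for ctype, term in rows:
    print(ctype, term)
```

Note how rank preserves the order of the three stage terms within a single expression record; the record itself carries no other structure.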
The Expression module table design allows each expression pattern to have
- 1 name
- 0, 1 or more
- Publications / Sources
- Anatomy terms
- Assay types
- Any other CV/Ontology term (e.g., detected, not detected)
Two Key Points
- Chado can support whatever your community decides your definition of an expression pattern is.
- However, Chado will not enforce that definition for you.
If you require an expression pattern to have
- 1 name
- 1 publication/source
- 1 Feature
- 1 anatomy term
- 1 stage
- 1 assay type
- 1 detected / not detected flag
- 0, 1 or more images
then you will have to write a script to check that.
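A minimal sketch of what such a checker might look like, assuming it has already fetched the cvterm_type rows attached to one expression record. The REQUIRED policy table is a hypothetical community rule, not anything Chado itself defines or enforces.

```python
# Hypothetical community policy: (min, max) count of terms per cvterm_type
# that each expression record must carry. Chado will not enforce this.
REQUIRED = {"anatomy": (1, 1), "stage": (1, 1), "assay": (1, 1)}

def check_expression(terms):
    """terms: list of (cvterm_type, cvterm) rows for one expression record.
    Returns a list of human-readable violations (empty list = record passes)."""
    problems = []
    for ctype, (lo, hi) in REQUIRED.items():
        n = sum(1 for t, _ in terms if t == ctype)
        if not lo <= n <= hi:
            problems.append(f"{ctype}: expected {lo}-{hi} terms, found {n}")
    return problems

ok = check_expression(
    [("anatomy", "brain"), ("stage", "adult"), ("assay", "ISH")])
bad = check_expression([("anatomy", "brain")])  # missing stage and assay
```

A production script would also check the required name, publication, feature link, and detected/not-detected flag, but the shape is the same: count what is attached and compare against your community's definition.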
expression table:
|uniquename||text||UNIQUE NOT NULL|
Tables referencing this one via Foreign Key Constraints:
- expression_cvterm, expression_image, expression_pub, expressionprop, feature_expression, wwwuser_expression
expression_cvterm table:
|expression_id||integer||UNIQUE#1 NOT NULL|
|cvterm_id||integer||UNIQUE#1 NOT NULL|
|cvterm_type_id||integer||UNIQUE#1 NOT NULL|
Tables referencing this one via Foreign Key Constraints:
Defined in the Chado Genetic Module.
A genotype in Chado is basically a name, with a pile of features associated to it.
This used to mean a set of alleles.
It is not clear how strains have been handled.
In the era of high-throughput sequencing, this may be much more detailed.
Genetic context. A genotype is defined by a collection of features: mutations, balancers, deficiencies, haplotype blocks, or engineered constructs.
Optional alternative name for a genotype, for display purposes.
|uniquename||text|| UNIQUE NOT NULL |
The unique name for a genotype; typically derived from the features making up the genotype.
Tables referencing this one via Foreign Key Constraints:
feature_genotype table:
|feature_id||integer||UNIQUE#1 NOT NULL|
|genotype_id||integer||UNIQUE#1 NOT NULL|
|chromosome_id||integer|| UNIQUE#1 |
A feature of SO type "chromosome".
|rank||integer|| UNIQUE#1 NOT NULL |
rank can be used for n-ploid organisms or to preserve order.
|cgroup||integer|| UNIQUE#1 NOT NULL |
Spatially distinguishable group. cgroup can be used for distinguishing the chromosomal groups, for example (RNAi products and so on can be treated as different groups, as they do not fall on a particular chromosome).
|cvterm_id||integer||UNIQUE#1 NOT NULL|
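The roles of rank and cgroup can be illustrated with a small sketch of feature_genotype rows for a diploid heterozygote. The allele and transgene names are hypothetical examples, and the tuples stand in for table rows; in real Chado these would be feature_id foreign keys.

```python
# Illustrative sketch (not real Chado SQL) of feature_genotype rows.
# Two alleles of the same gene share one cgroup (same spatial group) and are
# distinguished by rank (one per homolog); an unlinked transgene gets its own
# cgroup because it does not fall on a particular chromosome.
genotype = "dpp[mut] / dpp[+]; P{UAS-GFP}"   # hypothetical genotype name
feature_genotype = [
    # (feature, cgroup, rank)
    ("dpp[mut]",   0, 0),  # mutant allele on one homolog
    ("dpp[+]",     0, 1),  # wild-type allele on the other homolog
    ("P{UAS-GFP}", 1, 0),  # unlinked transgene in its own group
]

def alleles_at_locus(rows, cgroup):
    """All features in one spatially distinguishable group, in rank order."""
    return [f for f, g, r in sorted(rows, key=lambda x: x[2]) if g == cgroup]
```

For a triploid organism the same cgroup would simply carry three rows with ranks 0, 1, and 2.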
Also defined in the Chado Genetic Module.
The environmental component of a phenotype description.
environment table:
|uniquename||text||UNIQUE NOT NULL|
Tables referencing this one via Foreign Key Constraints:
environment_cvterm table:
|environment_id||integer||UNIQUE#1 NOT NULL|
|cvterm_id||integer||UNIQUE#1 NOT NULL|
Phenotype, Natural Diversity and Atlas Support
Phenotypes, natural diversity, and atlas support are all areas of future work in Chado. Chado does have a phenotype module, but it has not aged as well as other modules. Its support for natural diversity is limited to what is implemented in the genotype, environment, and phenotype modules. This is not robust enough to deal with the studies that are frequently done in the plant community, and are increasingly done in animal communities as well.
To address this, several efforts are currently underway. The Aniseed project includes four-dimensional anatomy, expression, and cell fate graphical atlases. Aniseed is currently in the process of reimplementing itself to use Chado. This work is likely to lead to contributions back to GMOD (both in Chado and a web interface) to better support these types of atlases.
Better natural diversity support will be added in the coming year. NESCent has developed a prototype natural diversity Chado module based on the GDPDM, which will add robust support for natural diversity data.