Template:ChadoTable analysisfeature

From GMOD

Latest revision as of 19:57, 24 November 2010

This template is a Chado Table Template: it defines a single table from the Chado schema, and every Chado table has a template like this one. The template is automatically included in two places:

  1. The module page for the module the table is a part of. This is where updates and comments should be posted.
  2. The Chado Tables page, which lists all tables.

You can include this template anywhere you want to show the table description.


Table: analysisfeature
Module: Companalysis

Computational analyses generate features (e.g. Genscan generates transcripts and exons; sim4 alignments generate similarity/match features). Analysisfeatures are stored using the feature table from the Sequence module. The analysisfeature table is used to decorate these features with analysis-specific attributes. A feature is an analysisfeature if and only if there is a corresponding entry in the analysisfeature table. Analysisfeatures will have two or more featureloc entries, with rank indicating query/subject.
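The query/subject convention can be made concrete with a small read-only sketch. This is not part of the template itself; it assumes a PostgreSQL Chado instance reachable through a placeholder connection string, the psycopg2 driver, and the usual feature, analysis and featureloc columns from the Sequence and Companalysis modules.

<pre>
# Minimal sketch: list features decorated by analysisfeature, with their
# per-analysis scores and their featureloc rows; rank separates the
# query location from the subject location.
import psycopg2

conn = psycopg2.connect("dbname=chado")  # placeholder connection settings

SQL = """
SELECT f.uniquename,
       a.program,
       af.rawscore,
       af.significance,
       fl.rank,          -- query vs. subject, per the convention above
       fl.fmin,
       fl.fmax
FROM   analysisfeature af
JOIN   feature    f  ON f.feature_id  = af.feature_id
JOIN   analysis   a  ON a.analysis_id = af.analysis_id
JOIN   featureloc fl ON fl.feature_id = f.feature_id
ORDER  BY f.uniquename, fl.rank
"""

with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for row in cur:
        print(row)
</pre>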

analysisfeature columns

* analysisfeature_id (serial) - PRIMARY KEY.
* feature_id (integer) - NOT NULL; part of the composite UNIQUE#1 constraint. Foreign key to feature.
* analysis_id (integer) - NOT NULL; part of the composite UNIQUE#1 constraint. Foreign key to analysis.
* rawscore (double precision) - The native score generated by the program; for example, the bitscore generated by blast, or sim4 or genscan scores. One should not assume that high is necessarily better than low.
* normscore (double precision) - The rawscore, but semi-normalized. Complete normalization to allow comparison of features generated by different programs would be nice, but is too difficult. Instead, the normalization should strive to enforce the following semantics: normscores are floating point numbers >= 0, and high normscores are better than low ones. For most programs it is sufficient to make the normscore the same as the rawscore, provided these semantics are satisfied.
* significance (double precision) - Some kind of expectation or probability metric, representing the probability that the analysis would appear randomly given the model. Any program or person querying this table can assume the following semantics: 0 <= significance <= n, where n is a positive number, theoretically unbounded but unlikely to be more than 10; low numbers are better than high numbers.
* identity (double precision) - Percent identity between the locations compared. Note that these four metrics do not cover the full range of scores possible; it would be undesirable to list every possible score, as this should be kept extensible. Instead, for non-standard scores, use the analysisprop table.
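As a companion sketch, this is one way a loader might attach the scores described above to an existing feature. The feature_id and analysis_id values are placeholders, and the PostgreSQL-style ON CONFLICT handling of the UNIQUE#1 pair is an assumption about loader behaviour, not something the schema mandates.

<pre>
# Minimal sketch: decorate an existing feature with analysis-specific scores.
# Because UNIQUE#1 spans (feature_id, analysis_id), a feature carries at most
# one analysisfeature row per analysis; re-running the analysis updates the
# existing scores instead of inserting a duplicate row.
import psycopg2

conn = psycopg2.connect("dbname=chado")  # placeholder connection settings

SQL = """
INSERT INTO analysisfeature
        (feature_id, analysis_id, rawscore, normscore, significance, identity)
VALUES  (%s, %s, %s, %s, %s, %s)
ON CONFLICT (feature_id, analysis_id) DO UPDATE
    SET rawscore     = EXCLUDED.rawscore,
        normscore    = EXCLUDED.normscore,
        significance = EXCLUDED.significance,
        identity     = EXCLUDED.identity
"""

with conn, conn.cursor() as cur:
    # 1001 and 42 stand in for real feature_id / analysis_id values.
    cur.execute(SQL, (1001, 42, 512.0, 512.0, 1e-30, 98.5))
</pre>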

Tables referencing analysisfeature via foreign key constraints:

* analysisfeatureprop (Companalysis module)