eHive production system - Installation


Download and install the necessary external software

Note: You may have these packages already installed in your system.

  1. Perl 5.6 or higher, since eHive code is written in Perl.
  2. MySQL 5.1 or higher
    eHive keeps its state in a MySQL database, so you will need:
    • a MySQL server installed on the machine where you want to maintain the state, and
    • MySQL clients installed on the machines where the jobs are to be executed.
    Version 5.1 or higher is recommended for compatibility with Compara pipelines.
  3. Perl DBI API
    The Perl database interface (DBI), together with the driver it uses to talk to MySQL databases (DBD::mysql).
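A quick way to confirm these prerequisites is a short shell check; the `cpan` hints in the messages are suggestions, not the only installation route:

```shell
# Report whether the required tools are on the PATH (never aborts):
for tool in perl mysql; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: NOT found -- install it before proceeding"
  fi
done

# Check that the Perl DBI module (and, ideally, the MySQL driver) load:
perl -MDBI -e 'print "DBI ", $DBI::VERSION, "\n"' \
  || echo "DBI missing: try 'cpan DBI'"
perl -MDBD::mysql -e 'print "DBD::mysql ", $DBD::mysql::VERSION, "\n"' \
  || echo "DBD::mysql missing: try 'cpan DBD::mysql'"
```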

Download and install essential and optional packages from BioPerl and EnsEMBL CVS

  1. Create a directory for the source code.
    It is advisable to have a dedicated directory where EnsEMBL-related packages will be deployed. Unlike the DBI modules, which can be installed system-wide by the system administrator, the EnsEMBL files/directories are ones you will want full (read+write) access to, so it is best to install them under your home directory. For example,
    $ mkdir $HOME/ensembl_main
    It will be convenient to set a variable pointing at this directory for future use:
    • using bash syntax (for best results, append this line to your ~/.bashrc configuration file):
      $ export ENSEMBL_CVS_ROOT_DIR="$HOME/ensembl_main"
    • using [t]csh syntax (for best results, append this line to your ~/.cshrc or ~/.tcshrc configuration file):
      $ setenv ENSEMBL_CVS_ROOT_DIR "$HOME/ensembl_main"
  2. Change into your ensembl codebase directory:
    $ cd $ENSEMBL_CVS_ROOT_DIR
  3. Log into the BioPerl CVS server (using "cvs" for password):
    $ cvs -d login
  4. Export the bioperl-live package:
    $ cvs -d export bioperl-live
  5. Log into the EnsEMBL CVS server at Sanger (using "CVSUSER" for password):
    $ cvs -d login
    Logging in to
    CVS password: CVSUSER
  6. Export ensembl and ensembl-hive CVS modules:
    $ cvs -d checkout ensembl
    $ cvs -d checkout ensembl-hive
  7. In the likely case you are going to use eHive in the context of Compara pipelines, you will also need to install ensembl-compara:
    $ cvs -d checkout ensembl-compara
  8. Add new packages to the PERL5LIB variable:
    • using bash syntax (for best results, append these lines to your ~/.bashrc configuration file):
      $ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/bioperl-live
      $ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl/modules
      $ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-hive/modules
      $ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-compara/modules # optional but recommended, see previous point.
    • using [t]csh syntax (for best results, append these lines to your ~/.cshrc or ~/.tcshrc configuration file):
      $ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/bioperl-live
      $ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl/modules
      $ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-hive/modules
      $ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-compara/modules # optional but recommended, see previous point.
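Once the variables are set (in a fresh shell, or after re-reading your configuration file), a quick check can confirm that Perl sees the new directories. The module name Bio::EnsEMBL::Hive::Utils used below is an assumption; any module from the ensembl-hive checkout would do:

```shell
# List the directories Perl will search via PERL5LIB:
perl -le 'print for split /:/, ($ENV{PERL5LIB} || "PERL5LIB not set")'

# Try loading a module from the ensembl-hive checkout (name assumed):
perl -MBio::EnsEMBL::Hive::Utils -e 'print "ensembl-hive modules load OK\n"' \
  || echo "ensembl-hive modules not on PERL5LIB yet"
```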

Useful files and directories of the eHive repository

  1. In ensembl-hive/scripts we keep Perl scripts used for controlling the pipelines. Adding this directory to your $PATH may make your life easier.
    • init_pipeline.pl is used to create hive databases, populate hive-specific and pipeline-specific tables and load data
    • beekeeper.pl is used to run the pipeline; it sends 'Workers' to the 'Meadow' to run the jobs of the pipeline
  2. In ensembl-hive/modules/Bio/EnsEMBL/Hive/PipeConfig we keep example pipeline configuration modules that can be used by init_pipeline.pl. A PipeConfig is a parametric module that defines the structure of the pipeline: which analyses, with what parameters, will have to be run, and in which order. The code for each analysis is contained in a RunnableDB module. For some tasks a bespoke RunnableDB has to be written, whereas other problems can be solved using only 'universal building blocks'. A typical pipeline is a mixture of both.
  3. In ensembl-hive/modules/Bio/EnsEMBL/Hive/RunnableDB we keep 'universal building block' RunnableDBs:
    • SystemCmd is a parameter substitution wrapper for any command line executed by the current shell
    • SqlCmd is a parameter substitution wrapper for running any MySQL query or a session of linked queries against a particular database (the eHive pipeline database by default, but not necessarily)
    • JobFactory is a universal module for dynamically creating batches of same-analysis jobs (with different parameters) to be run within the current pipeline
  4. In ensembl-hive/modules/Bio/EnsEMBL/Hive/RunnableDB/LongMult we keep the bespoke RunnableDBs for the long-multiplication example pipeline.
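Putting the pieces together, a typical session might look like the sketch below. The LongMult_conf module name, the ENSADMIN_PSW variable, the database name and the connection details are assumptions drawn from common eHive usage, so check each script's own documentation against your checkout before relying on them:

```shell
# Put the eHive control scripts on the PATH, as suggested above:
export PATH="$PATH:$ENSEMBL_CVS_ROOT_DIR/ensembl-hive/scripts"

if command -v init_pipeline.pl >/dev/null 2>&1; then
    # Create and populate the example long-multiplication pipeline database
    # (module name and -password option assumed; see the script's docs):
    init_pipeline.pl Bio::EnsEMBL::Hive::PipeConfig::LongMult_conf \
        -password "$ENSADMIN_PSW"
    # Send Workers to the Meadow until all jobs are done
    # (URL components are placeholders for your own server/database):
    beekeeper.pl -url "mysql://ensadmin:${ENSADMIN_PSW}@localhost/long_mult" -loop
else
    echo "ensembl-hive/scripts not found -- check ENSEMBL_CVS_ROOT_DIR"
fi
```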