= Installation of KAGRA summary pages =
 * Writer : Duncan

== ROOT set up ==
'''The following should be performed as the `root` user'''

 * Add Debian repositories as [[https://wiki.debian.org/SourcesList#A.2Fetc.2Fapt.2Fsources.list|here]]
 * Install `openssh-server`:
{{{#!highlight bash
apt-get update
apt-get install openssh-server
systemctl start ssh
}}}
 * Install Apache (for the web server) and enable the `userdir` module:
{{{#!highlight bash
apt-get install apache2
a2enmod userdir
systemctl restart apache2
}}}
 * To enable `.htaccess` files within the `userdir` configuration for Apache, the `All` option was added to the `AllowOverride` directive as follows:
{{{#!highlight xml
...
        AllowOverride FileInfo AuthConfig Limit Indexes All
...
}}}
 * Add LIGO Debian repositories as [[https://wiki.ligo.org/Computing/DASWG/DebianJessie?redirectedfrom=Computing/DASWG.SoftwareOnDebian#Configure_Repositories|here]], and install the keyring to trust them:
{{{#!highlight bash
apt-get install lscsoft-archive-keyring
}}}
 * Install the basic dependencies:
{{{#!highlight bash
apt-get install \
    git \
    python-pip \
    python-virtualenv \
    python-nds2-client \
    lal-python \
    ldas-tools-framecpp-python \
    python-gwpy \
    lalapps
}}}

== Summary pages setup ==
 * The following is performed as the `controls` user
 * Set up a virtualenv for the summary-page code:
{{{#!highlight bash
python -m virtualenv ~/opt/summary-2.7 --system-site-packages
}}}
 * Activate the virtualenv to enter that environment:
{{{#!highlight bash
source ~/opt/summary-2.7/bin/activate
}}}
 * Then install everything we need:
{{{#!highlight bash
python -m pip install \
    "gwpy>=0.12.0" git+https://github.com/gwpy/gwsumm.git
}}}

== Automation setup ==
 * Install HTCondor:
{{{#!highlight bash
apt-get install htcondor
}}}

=== Automation notes ===
The automation is handled using HTCondor's cron-like time-scheduling system.
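The actual submit files live under `/home/controls/etc/summary/condor` and are not reproduced here, but a time-scheduled HTCondor submit file looks roughly like the sketch below. The executable path, schedule, and file names are illustrative assumptions, not the real contents of `gw_daily_summary_kagra.sub`; only the `cron_*` and `on_exit_remove` submit commands are the actual HTCondor mechanism being described:
{{{
# hypothetical daily-summary submit file -- illustrative only
universe        = vanilla
executable      = /home/controls/opt/summary-2.7/bin/gw_summary
arguments       = "day --verbose"
getenv          = True
# run once per day at 00:10 using HTCondor's cron-style scheduling
cron_minute     = 10
cron_hour       = 0
# keep the job in the queue after each run, so it persists and re-fires
on_exit_remove  = False
log             = gw_daily_summary.log
error           = gw_daily_summary.err
output          = gw_daily_summary.out
queue
}}}
Because `on_exit_remove = False` keeps the job in the queue indefinitely, this is the style of job described below as "persistent".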
 * If the Condor queue is empty, you need to restart the scheduled jobs:
{{{#!highlight bash
cd /home/controls/etc/summary/condor
condor_submit gw_daily_summary_kagra.sub
condor_submit gw_daily_summary_rerun_kagra.sub
}}}
 These jobs will persist in the Condor queue. If the persistent jobs are held, '''you cannot `condor_release` them'''; you must `condor_rm` them and `condor_submit` again. For any other jobs (`gw_summary`), `condor_release` works fine.

== Monitoring setup ==
I (Duncan) have installed [[http://ganglia.info|Ganglia]] on `k1sum0` to allow web-based monitoring of CPU/memory/disk/network etc. This is achieved as follows:

 * Install the Ganglia software:
{{{#!highlight bash
apt-get install ganglia-monitor gmetad ganglia-webfrontend
}}}
 * Link the Ganglia Apache configuration into place (Apache is restarted below):
{{{#!highlight bash
ln -s /etc/ganglia-webfrontend/apache.conf /etc/apache2/sites-enabled/ganglia.conf
}}}
 * Edit the configuration files to include some useful information:
{{{
/etc/ganglia/gmond.conf   # edit cluster parameters around line 20
/etc/ganglia/gmetad.conf  # edit gridname on line 72
}}}
 * Restart all of the necessary services:
{{{#!highlight bash
systemctl restart ganglia-monitor gmetad apache2
}}}
The Ganglia output is viewable at http://k1sum0/ganglia/. Ganglia is designed to monitor an entire cluster of machines; [[https://www.digitalocean.com/community/tutorials/introduction-to-ganglia-on-ubuntu-14-04#client-installation|this page]] might be useful if you wish to add other machines to the Ganglia output.

== After Installation ==
 * Writer : TYamamoto

After the installation of GWpy and the daily-summary tools, TYamamoto installed `emacs`:
{{{
> su
> apt-get install emacs
}}}
He also installed `sudo` and `nfs-common`; `nfs-common` is required because we use `nfs` as the mount type in `/etc/fstab`:
{{{
> su -
> apt-get install sudo nfs-common
> emacs -nw /etc/fstab
    k1nfs0:/export/users    /users    nfs    rw,bg,soft    0 0
> mount -a
}}}
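As a quick sanity check before running `mount -a`, the fields of the new `/etc/fstab` entry can be pulled apart with `awk` to confirm the filesystem type and that the `bg`/`soft` options are present. This is only an illustrative sketch: it reads the entry from a shell variable rather than the real `/etc/fstab`:
{{{#!highlight bash
# illustrative check of the fstab entry added above
fstab_line="k1nfs0:/export/users /users nfs rw,bg,soft 0 0"
fstype=$(echo "$fstab_line" | awk '{print $3}')  # 3rd field: filesystem type
opts=$(echo "$fstab_line" | awk '{print $4}')    # 4th field: mount options
echo "$fstype $opts"   # nfs rw,bg,soft
}}}
On the real machine, replace the here-variable with `grep /users /etc/fstab` to check the line actually written to disk.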