
Helix Systems

The Helix/Biowulf staff will present a series of classes in Apr 2014. All 3 classes will be hands-on, including the Biowulf class.

Classes are free but registration is required. Helix/Biowulf users will have priority enrollment, and new users are highly encouraged to attend.

Introduction to Linux
----------------------
Date & Time: Tuesday, Apr 8, 9 am - 4 pm
Location: Bldg 12A, Rm B51

This class is intended as a starting point for individuals new to Linux and UNIX. The class will center on basic UNIX/Linux concepts: logging in, navigating the file system, commands for interacting with files, running and viewing processes, checking disk space and other common tasks. The class will also cover the use of some services specific to Helix/Biowulf usage.
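A minimal sketch of the kinds of commands the class covers (the directory and file names are illustrative):

```shell
# Common Linux tasks: navigating, creating, and inspecting files
pwd                        # print the current working directory
mkdir -p linux_demo        # create a practice directory
cd linux_demo
echo "hello, helix" > notes.txt    # create a small file
ls -l notes.txt            # list the file with details
cat notes.txt              # view its contents
df -h .                    # check disk space on this filesystem
ps -u "$USER" | head -n 5  # view a few of your running processes
cd .. && rm -r linux_demo  # clean up
```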

Bash Shell Scripting for Helix and Biowulf
------------------------------------------
Date & Time: Tuesday, Apr 15, 9 am - 4 pm
Location: Bldg 12A, Rm B51

The default shell on many Linux systems is bash. Bash shell scripting provides a method for automating common tasks on Linux systems (such as Helix and Biowulf), including transferring and parsing files, creating qsub and swarm scripts, pipelining tasks, and monitoring jobs. This class will give a hands-on tutorial on how to create and use bash shell scripts in a Linux environment.
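As a small illustration of the sort of automation the class covers, the script below builds a swarm command file with one command line per input file (the input files and the `wc -l` command are stand-ins for real data and analysis):

```shell
#!/bin/bash
# Sketch: generate a swarm command file, one command line per input file.
set -euo pipefail

mkdir -p inputs
touch inputs/sample1.fq inputs/sample2.fq   # stand-in input files

: > jobs.swarm                              # start with an empty command file
for f in inputs/*.fq; do
    echo "wc -l $f > ${f%.fq}.count" >> jobs.swarm
done

cat jobs.swarm
# On Biowulf, this file would then be submitted with:  swarm -f jobs.swarm
```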

NIH Biowulf Cluster: Scientific Computing
-----------------------------------------
Date & Time: Tuesday, Apr 29, 9 am - 4 pm
Location: Bldg 12A, Rm B51

Morning: Introduction to the Biowulf Linux cluster, cluster concepts, accounts, logging in, storage options, interactive vs. batch jobs, how to set up and submit a simple batch job, batch queues, available software, and job monitoring.

Afternoon: Hardware and network configuration, types of nodes, selection of nodes using properties, system software, parallel programs, programming tools.
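A minimal batch-script sketch of the kind set up in the morning session; the `#PBS` directives, job name, and application command are illustrative, and current options should be taken from the Biowulf documentation:

```shell
#!/bin/bash
#PBS -N myjob                 # job name (illustrative)
#PBS -l nodes=1               # request a single node
# Batch scripts are ordinary shell scripts; the #PBS lines are directives
# read by the batch system at submission time (e.g. qsub myjob.sh).

cd "${PBS_O_WORKDIR:-.}"      # PBS sets this to the directory qsub was run from
echo "Job running on $(hostname)"
# The real application command would follow here, for example:
# myprogram -in data.txt -out results.txt
```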


Support for the CHARMM molecular simulation package on Biowulf has been updated to include the Feb 2014 c38b2 release, which offers many new features and notable speed improvements. The online documentation has also been updated: the Biowulf-specific pages have been revised, and the HTML version of the c38b2 documentation has been added. For additional details, see

New features include:

* A domain decomposition (DOMDEC) scheme for fast MD; see domdec.doc
* Full conversion to Fortran95, allowing system and array sizes to be specified at run time; see dimens.doc
* An interface to the Q-Chem program for QM/MM calculations; see qchem.doc
* A facility for replica exchange simulations; see repdstr.doc
* A source code based interface mechanism to external programs; see mscale.doc

GPU support is also available via a CHARMM interface to OpenMM, which may be added if there is sufficient interest.

The DOMDEC code is significantly faster on the same number of cores, and scales to larger numbers of cores much more readily than earlier releases. Benchmarks indicate performance that is often competitive with NAMD, and typically no worse than a factor of two slower, a substantial improvement.
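As a rough sketch, a parallel CHARMM run on the cluster might be launched along these lines; the module name, core count, and input file are hypothetical, and the actual setup should be taken from the updated Biowulf CHARMM pages:

```shell
# Hypothetical launch of a parallel CHARMM c38b2 job (all names illustrative)
module load charmm/c38b2          # module name assumed, not verified
mpirun -np 16 charmm < prod.inp > prod.out
```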


--
Rick Venable
5635 FL/T906, Membrane Biophysics Section
NIH/NHLBI Lab. of Computational Biology
Bethesda, MD 20892-9314 U.S.A.
(301) 496-1905
venabler AT nhlbi*nih*gov


The Biowulf/Helix staff is pleased to announce the availability of Globus, a service developed at Argonne National Laboratory that provides a convenient interface for moving, syncing, and sharing large amounts of data.

A short introduction to Globus will be presented by Biowulf staff as part of the NIH Workshop on Advanced Networking for Data-Intensive Biomedical Research on Wednesday, Apr 9, 2014, at 1:30 pm in Room D, Natcher Conference Center.

Information about setting up a Globus account, transferring data, and sharing with collaborators is at

Please contact with questions.

Biowulf/Helix staff.


A hardware failure on the clusternet core switch took a large portion of the cluster offline at around 10:10 am this morning. The problem was fixed by 11:20 am, but running jobs may have been affected.

Please check the status of your running jobs.

Steven Fellini
Biowulf staff


The Global Optimization Toolbox for Matlab is available as a trial installation on the Helix systems. This toolbox provides methods that search for global solutions to problems that contain multiple maxima or minima.

The toolbox is available on Helix/Biowulf until June 6, 2014. We are considering purchasing this product if there is sufficient interest from our user community. There will be a WebEx/conference call with a one-hour overview of the Global Optimization Toolbox.

-------------------------------------
Meeting Information
-------------------------------------
Topic: Global Optimization in MATLAB
Date: Thursday, May 22, 2014
Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 599 228 900
Meeting Password: (This meeting does not require a password.)

To join the audio, please call 866-872-4258, conference code 280 235 5657.

-------------------------------------------------------
To join the online meeting (Now from mobile devices!)
-------------------------------------------------------
1. Go to

2. If requested, enter your name and email address.
3. If a password is required, enter the meeting password: (This meeting does not require a password.)
4. Click "Join".


This message only affects users who are mounting the Helix Protein Data Bank (PDB) mirror on their local machine.

Starting Monday, June 16, 2014, the PDB mirror will no longer be available via an NFS mount. Instead, the mirror can be mapped to your Mac, Windows, or Linux desktop via Samba. Instructions for mapping a Samba mount point can be found at ('Mapped Network Drive'). Use as the server address. You will need to authenticate with your NIH login username and password.
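For a Linux desktop, the mapping looks roughly like the following; `<server>` and `<share>` are placeholders for the address and share name given in the instructions (not reproduced here), and the mount options are illustrative:

```shell
# Rough sketch of mounting the PDB mirror via Samba/CIFS on Linux
# (<server> and <share> are placeholders; authentication uses your
#  NIH login username and password)
sudo mkdir -p /mnt/pdb
sudo mount -t cifs //<server>/<share> /mnt/pdb -o username=$USER,domain=NIH
```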

Please send questions to

Thank you. The Helix Systems Staff


The Helix/Biowulf staff will present a short introduction and overview of Galaxy on Helix Systems. This 3-hour class will be taught on Tuesday, July 8, from 9 am to 12 pm in Bldg 12, Rm B51.

Galaxy on Helix Systems is a web-based portal to command-line tools on the Biowulf cluster, specifically designed for intramural researchers at NIH. The class will demonstrate the use of Galaxy for:

* transferring datasets in and out of Galaxy
* running jobs
* visualizing results
* organizing datasets in a library
* automating jobs with workflows
* troubleshooting common errors

Galaxy on Helix Systems requires a Helix account and a web browser.

Registration is required and is now handled through our new class registration system:

Helix Systems Staff


On Thursday June 26 at 10 pm and lasting until Friday June 27 at 6 am, the NIH networking group will be moving the network switch that provides users connectivity to the Helix/Biowulf systems. During this maintenance window, there may be a brief interruption in connectivity to the Helix and Biowulf systems, but the impact should be minimal. Please note that this will have no impact on any running Biowulf jobs.

Helix Systems Staff


Imputing big data from GWAS

A discussion of imputation and large-scale meta-analyses of GWAS data on the Biowulf cluster, featuring hands-on examples and experienced instruction.
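On Biowulf, imputation at this scale is typically split by chromosome and run in parallel as a swarm; the sketch below builds such a command file (`impute_tool` and its flags are hypothetical stand-ins for the actual software used in class):

```shell
#!/bin/bash
# Sketch: build a swarm file with one (hypothetical) imputation command
# per autosome, the usual way large GWAS jobs are parallelized on Biowulf.
set -euo pipefail

: > impute.swarm
for chr in $(seq 1 22); do
    echo "impute_tool --chr $chr --in study_chr${chr}.gen --out imputed_chr${chr}.gen" >> impute.swarm
done

wc -l impute.swarm    # one command line per autosome
# Submitted on Biowulf with:  swarm -f impute.swarm
```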

Instructor: Michael Nalls (NIA)
Date: Wed Sep 10, 2014
Time: 9 am - noon
Location: Bldg 12, Rm B51

The class is free but registration is required. Please register at
A Helix & Biowulf account is required for the hands-on exercises in this class. Contact if you have questions about your account.

[Michael Nalls is the Head of Statistical Genetics at the Lab. of Neurogenetics, NIA. His work focuses on creating analytic pipelines for health informatics by applying cutting-edge statistical methodologies to massive datasets as a means of facilitating the investigation of neurological diseases and complex traits. Recent paper: Large-scale meta-analysis of genome-wide association data identifies six new risk loci for Parkinson's disease. Nalls, Pankratz, et al., Nature Genetics, doi:10.1038/ng.3043 (2014).]


This morning a hardware failure on the cluster network's core switch caused several hundred nodes to go offline, which in turn caused filesystem problems such as "Stale NFS file handle" errors.

During this time job scheduling was turned off to prevent new jobs from starting on nodes affected by the problem. Scheduling has now been turned back on.

Please check the health of your jobs, and resubmit if necessary.

Steven Fellini
Biowulf staff


This page last reviewed: December 14, 2010