
Helix Systems

11/5/2014
The Parallel Computing Toolbox for MATLAB is available as a trial installation on the Helix systems. This toolbox lets you solve computationally intensive and data-intensive problems using multicore processors, GPUs and computer clusters. For instructions on how to use the toolbox on the NIH Biowulf cluster, see

http://biowulf.nih.gov/apps/matlabdce.html

The toolbox is available on helix/biowulf until November 14, 2014 (this will probably be extended). We are considering purchasing this product if there is sufficient interest from our user community. There will be a WebEx/conference call with a 1.5-hour overview of the Parallel Computing Toolbox.

------------------------------------- Meeting Information ------------------------------------
Topic: Parallel Computing with MATLAB
Date: Thursday, November 6, 2014
Time: 1:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 592 633 643
Meeting Password: (This meeting does not require a password.)

To join the audio, please call 866-872-4258, conference code 590 420 6802.

-------------------------------------------------------
To join the online meeting (now from mobile devices!)
-------------------------------------------------------
1. Go to https://mathworks.webex.com/mathworks/j.php?ED=314126497&UID=0&RT=MiMxMQ%3D%3D
2. If requested, enter your name and email address.
3. If a password is required, enter the meeting password: (This meeting does not require a password.)
4. Click "Join".




11/24/2014

ACEMD is a high-performance molecular dynamics code for biomolecular systems, designed specifically for NVIDIA GPUs. Simple and fast, ACEMD uses commands and input files very similar to NAMD's, and produces output files in NAMD or GROMACS formats. Some NIH users have reported excellent simulation performance with ACEMD.

ACEMD is a commercial product, and we currently have 2 test licenses (for 2 GPU nodes) available on Helix. Information about running ACEMD on Biowulf is available at http://biowulf.nih.gov/apps/acemd.html

We are considering the purchase of some ACEMD licenses for the Biowulf GPU nodes if there is sufficient interest from the Biowulf user community. Please let us know whether you would be interested in continuing to use ACEMD on Biowulf.

Susan.


12/2/2014

The Helix/Biowulf staff will present a series of classes in January/February 2015. All 3 classes will be hands-on, including the Biowulf class.

Classes are free but registration is required. Helix/Biowulf users will have priority enrollment, and new users are highly encouraged to attend.

Introduction to Linux
----------------------
Date & Time: Mon, Jan 26, 9 am - 4 pm
Location: Bldg 12A, Rm B51
This class is intended as a starting point for individuals new to Linux and UNIX. The class will center on basic UNIX/Linux concepts: logging in, navigating the file system, commands for interacting with files, running and viewing processes, checking disk space and other common tasks. The class will also cover the use of some services specific to Helix/Biowulf usage.

Bash Shell Scripting for Helix and Biowulf
------------------------------------------
Date & Time: Tue, Jan 27, 9 am - 4 pm
Location: Bldg 12A, Rm B51
The default shell on many Linux systems is bash. Bash shell scripting provides a method for automating common tasks on Linux systems (such as Helix and Biowulf), including transferring and parsing files, creating qsub and swarm scripts, pipelining tasks, and monitoring jobs. This class will give a hands-on tutorial on how to create and use bash shell scripts in a Linux environment.
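As a small taste of what the scripting class covers, here is a minimal sketch of a bash script that loops over a set of input files and writes a swarm command file, one command per line. The program name ("myprog") and the input/output paths are hypothetical placeholders, not part of any Helix/Biowulf software; adapt them to your own data.

    #!/bin/bash
    # Sketch: build a swarm command file from a directory of input files.
    # "myprog" and the paths below are hypothetical placeholders.

    outfile=commands.swarm
    : > "$outfile"                      # start with an empty command file

    for f in /data/$USER/inputs/sample*.fa; do
        base=$(basename "$f" .fa)
        # swarm runs each line of the file as an independent cluster job
        echo "myprog --in $f --out /data/$USER/results/${base}.out" >> "$outfile"
    done

    echo "Wrote $(wc -l < "$outfile") commands to $outfile"

The resulting file would then be submitted with the swarm utility (e.g. swarm -f commands.swarm); see the Biowulf swarm documentation for details.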

NIH Biowulf Cluster: Scientific Computing
-----------------------------------------
Date & Time: Thu, Feb 19, 9 am - 4 pm
Location: Bldg 12A, Rm B51
Morning: Introduction to the Biowulf Linux cluster, cluster concepts, accounts, logging in, storage options, interactive vs. batch jobs, how to set up and submit a simple batch job, batch queues, available software, and job monitoring.
Afternoon: Hardware and network configuration, types of nodes, selection of nodes using properties, system software, parallel programs, programming tools.
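As an illustration of the "simple batch job" covered in the morning session, below is a minimal sketch of a batch script for the PBS batch system used on Biowulf; the module name, program and file names are hypothetical placeholders, and node-selection options are covered in the class and the online documentation.

    #!/bin/bash
    #PBS -N myjob                    # job name
    #
    # Minimal sketch of a Biowulf batch script; "myapp" and the file names
    # are hypothetical placeholders.

    cd $PBS_O_WORKDIR                # run in the directory the job was submitted from

    module load myapp                # load the (hypothetical) application module
    myapp --input myinput.dat --output myoutput.dat

Such a script would typically be submitted with something along the lines of "qsub -l nodes=1 myjob.sh".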

Registration for all three classes is available at http://helixweb.nih.gov/nih/classes/index.php




12/10/2014

In order to ensure fair user access to the modest number of Infiniband-connected nodes on the Biowulf cluster, effective immediately, a 240-hour walltime limit will be placed on jobs allocating nodes with the "ib" resource. Jobs running for longer than 240 hours will be automatically terminated by the batch system.

Jobs that have already been running for more than 240 hours as of Thursday morning will be deleted at that time; they can be restarted from checkpoint files.

Users who need to run simulations for longer than 240 hours may wish to use a "job chaining" strategy, which is explained on the following web pages:

For GROMACS, http://biowulf.nih.gov/apps/gromacs.html#chain
For NAMD, http://biowulf.nih.gov/apps/namd.html#chain
For AMBER, http://biowulf.nih.gov/apps/amber.html#chain
For CHARMM, http://biowulf.nih.gov/apps/charmm/#daisy
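The pages above describe chaining tailored to each package. As a generic illustration only, the sketch below chains segments using PBS job dependencies so that each segment starts after the previous one finishes and no single job exceeds the walltime limit; "run_segment.sh" is a hypothetical batch script that continues the simulation from its latest checkpoint/restart file, and the application-specific pages may use a different mechanism (such as a script that resubmits itself).

    #!/bin/bash
    # Generic job-chaining sketch: submit several dependent job segments.
    # "run_segment.sh" is a hypothetical batch script that restarts the
    # simulation from the latest checkpoint file.

    nsegments=5
    previd=""

    for i in $(seq 1 "$nsegments"); do
        if [ -z "$previd" ]; then
            jobid=$(qsub run_segment.sh)
        else
            # start this segment only after the previous job has finished
            jobid=$(qsub -W depend=afterany:"$previd" run_segment.sh)
        fi
        echo "Segment $i submitted as job $jobid"
        previd="$jobid"
    done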

Please send questions and comments to staff@biowulf.nih.gov. In particular, if you have better ways to set up chained jobs than those described above, please let us know.




11/25/2014

Effective immediately, the 'ls' command will no longer be aliased to 'ls --color'. This alias allows various file types to be displayed in different colors in your terminal window. However, it also requires additional filesystem processing, which can lead to poor 'ls' response times, especially for directories containing large numbers of files.

In order to improve filesystem performance for all users, the Helix staff strongly recommends that users not use the "--color" option to the 'ls' command. If you are setting this alias in your .bashrc file, you may want to consider removing it for better directory listing performance.
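For example, you can check whether the alias is active in your current session and where it is being set (a quick sketch; the exact line in your .bashrc may look slightly different):

    # Show whether 'ls' is currently aliased in your shell:
    alias ls

    # Find the line that sets it in your startup file:
    grep -n "alias ls" ~/.bashrc

    # If you see something like the following, delete or comment it out, then
    # log out and back in (or run 'unalias ls') for the change to take effect:
    #   alias ls='ls --color=auto'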

If you have any questions, please contact the NIH Helix Systems staff at staff@helix.nih.gov.




1/12/2015

On Saturday, January 17, 2015, from 7:00 AM to 3:00 PM, maintenance will be performed on the chilled water system in the Building 12 data center. The NIH High Performance Computing (HPC) chilled water vendor will be performing this work in coordination with the CIT Data Center Facilities and ORF Building 12 facilities teams. The impact to Helix and Biowulf systems has been designated as "Possible Service Affecting - Low", meaning there is a low possibility that a service-affecting event will occur. If an issue does occur, however, there will be a six- to eight-hour cooling outage, which will require Helix and Biowulf to shut down according to established contingency plans. While CIT is taking every precaution to minimize the risk, there is a small chance that a service interruption will occur during the maintenance window. If such an incident occurs, CIT will provide additional information at the time of the event.

If you have any questions, please contact the Helix Systems staff at staff@helix.nih.gov.

Helix Systems Staff




1/14/2015

Join the High Performance Computing Facility Team and Support Leading Edge Biomedical Research at the National Institutes of Health!

The Biowulf staff advances research at NIH by planning, managing, and supporting the NIH High Performance Computing (HPC) Facility, including general-purpose and high-performance computers, storage, and the biomedical applications that run on them. We are currently looking for an enthusiastic individual who will work with our staff of computational biologists and system administrators to

  • Provide guidance, support, training and advice in computational biology systems to NIH intramural researchers
  • Ensure that computational biology applications and services are current, appropriate and sufficient to serve researchers' needs
  • Communicate with researchers to answer questions, troubleshoot problems and develop appropriate computational strategies
  • Install and update applications, document system changes for staff, write and maintain documentation for users
  • Consult and collaborate with system administrator coworkers to determine best system configurations for applications
  • Remain abreast of current and emerging computational biology technologies and tools


Required skill set: Genomics knowledge; hands-on experience with a broad spectrum of genomics applications and next-generation sequence data analysis; experience with biological databases; experience working in an HPC environment; ability to install applications; excellent oral and written communication skills; experience presenting and/or teaching; ability to work both independently and as part of a team; understanding of bottlenecks associated with genome-scale data analysis; experience with shell scripting; ability to compile and build applications; ability to develop reasonable tests and example input for genomic applications; self-motivated with excellent organizational, troubleshooting and problem-solving skills; ability to learn about new technologies or software quickly.

Other desirable skills: working knowledge of R; working knowledge of MATLAB; working knowledge of MySQL; ability to program in at least one of the following languages: Python, C/C++, Java or Perl.

Candidates must have a PhD in the biological sciences with one to two years' experience in genomic analysis. This is a contractor position.

If interested, send your resume to Steve Bailey at steve.bailey@nih.gov




2/18/2015

Due to some last-minute dropouts, there are a few spots available in tomorrow's Biowulf class. The next Biowulf class will be in June or July.

If you are interested in attending tomorrow's class, please register at

http://helixweb.nih.gov/nih/classes/index.php

Class: NIH Biowulf Cluster: Scientific Computing
Instructor: Steven Fellini / Susan Chacko (Helix/Biowulf staff)
Date: Thu Feb 19, 2015
Time: 9 am - 4 pm
Location: Bldg 12, Rm B51




2/23/2015

As part of the NIH Network modernization effort, maintenance on the Helix Systems network will cause a brief (15-30 minute) loss of network connectivity to all Helix Systems at some point between 10 PM Tuesday, February 24 and 2 AM Wednesday, February 25.

During this time all network connections to the Helix Systems will terminate, and the Helix Systems will not be accessible.

The Helix Systems include helix.nih.gov, biowulf.nih.gov, helixweb.nih.gov, helixdrive.nih.gov, biospec.nih.gov and sciware.nih.gov.

Please send comments and questions to staff@helix.nih.gov




2/27/2015

On Tuesday, March 3, starting at 5:30 am, system maintenance lasting up to 90 minutes will be performed on the Helix Systems Samba servers known as Helixdrive (helixdrive.nih.gov). The Helixdrive service allows users to mount or map the Helix/Biowulf home, data and scratch directories to their local systems.
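If you are not sure whether you use Helixdrive, one way to check from a Samba-capable Linux or Mac client is to list the shares available to you on the server (a sketch only; the share names you see depend on your account, and you will be prompted for your NIH password):

    # Requires the smbclient tool from the Samba package on your local machine
    smbclient -L helixdrive.nih.gov -U your_nih_username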

During this maintenance, any existing user mounts to home, data or scratch directories will be terminated. Users will be unable to initiate new mounts until the maintenance has been completed.

This maintenance will not affect any other Helix/Biowulf services, including Helix and Biowulf login sessions, Biowulf cluster jobs, Helixweb applications or email.

Please contact the Helix Systems staff at staff@helix.nih.gov if you have any questions.



