Helix Systems

2/18/2015
Due to some last-minute dropouts, there are a few spots available in tomorrow's Biowulf class. The next Biowulf class will be in June or July.

If you are interested in attending tomorrow's class, please register at

http://helixweb.nih.gov/nih/classes/index.php

Class: NIH Biowulf Cluster: Scientific Computing
Instructor: Steven Fellini / Susan Chacko (Helix/Biowulf staff)
Date: Thu Feb 19, 2015
Time: 9 am - 4 pm
Location: Bldg 12, Rm B51




2/23/2015

As part of the NIH Network modernization effort, maintenance on the Helix Systems network will cause a brief, 15-30 minute loss of network connectivity to all Helix Systems at some point between 10 PM Tuesday, February 24 and 2 AM Wednesday, February 25.

During this time all network connections to the Helix Systems will terminate, and the Helix Systems will not be accessible.

The Helix Systems include helix.nih.gov, biowulf.nih.gov, helixweb.nih.gov, helixdrive.nih.gov, biospec.nih.gov and sciware.nih.gov.

Please send comments and questions to staff@helix.nih.gov




2/27/2015

On Tuesday March 3 starting at 5:30 am, system maintenance will be performed on the Helix Systems samba servers known as Helixdrive (helixdrive.nih.gov) which could last up to 90 minutes. The Helixdrive service allows users to mount or map the Helix/Biowulf home, data and scratch directories to their local systems.
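For example, a user on a Mac could map their Helix home directory with something along these lines (a sketch only; the share path is an assumption, and you will be prompted for your NIH credentials):

  mkdir -p ~/helix_home
  mount_smbfs //YourNIHUsername@helixdrive.nih.gov/YourNIHUsername ~/helix_home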

During this maintenance, any existing user mounts to home, data or scratch directories will be terminated. Users will be unable to initiate new mounts until the maintenance has been completed.

This maintenance will not affect any other Helix/Biowulf services, including Helix and Biowulf login sessions, Biowulf cluster jobs, Helixweb applications or email.

Please contact the Helix Systems staff at staff@helix.nih.gov if you have any questions.




3/10/2015

At approximately noon yesterday, Monday Mar 9, the Biowulf cluster had a network problem that caused some nodes to temporarily lose connections to the fileservers. Some running jobs were affected.

If you had jobs running at noon yesterday, please check their status. If 'jobload' shows a job as stalled (i.e., 0% CPU), you should restart it. You should also check output files to make sure that your jobs are continuing to produce output.
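One quick way to confirm that a job is still writing output is to watch its output file grow (a minimal sketch; the output filename here is hypothetical):

  ls -l myjob.out    # note the file size and modification time
  sleep 60
  ls -l myjob.out    # the size/timestamp should advance if the job is active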

Apologies for the problem. Please contact staff@biowulf.nih.gov if you have questions.

Helix/Biowulf staff.




3/11/2015

On Monday March 16 starting at 9:00 pm, system maintenance will be performed on the Helix Systems samba servers known as Helixdrive (helixdrive.nih.gov) which could last up to 3 hours. The Helixdrive service allows users to mount or map the Helix/Biowulf home, data and scratch directories to their local systems.

During this maintenance, any existing user mounts to home, data or scratch directories will be terminated. Users will be unable to initiate new mounts until the maintenance has been completed.

This maintenance will not affect any other Helix/Biowulf services, including Helix and Biowulf login sessions, Biowulf cluster jobs, Helixweb applications or email.

Please contact the Helix Systems staff at staff@helix.nih.gov if you have any questions.




3/11/2015

The Computer Vision System Toolbox for Matlab is available as a trial installation on the Helix systems. This toolbox provides algorithms, functions, and apps for the design and simulation of computer vision and video processing systems. Users can perform object detection and tracking, feature matching, and other video-related tasks.

There are 5 trial licenses, and the toolbox is available on Helix/Biowulf until April 8, 2015. We are considering purchasing this product if there is sufficient interest from our user community.

For more information and usage instructions, see

http://www.mathworks.com/help/vision/index.html
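If you would like to try the toolbox, a quick check along these lines should confirm that it is visible from your Matlab session (a sketch; the 'module load matlab' step is an assumption about how Matlab is set up in your Helix environment):

  module load matlab
  matlab -nodisplay -r "ver('vision'); exit"   # prints the toolbox version if it is installed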




3/23/2015

The Parallel Computing Toolbox for Matlab is now available on the Helix systems. This toolbox lets you solve computationally intensive and data-intensive problems using multicore processors, GPUs, and computer clusters. For instructions on how to use the toolbox on the NIH Biowulf cluster, see

http://biowulf.nih.gov/apps/matlabdct.html

NOTE: Currently, only one Parallel (Distributed) Computing Toolbox license has been purchased, so jobs are limited to one node and 12 cores on the Biowulf cluster. A workaround would be to use the Matlab Compiler to compile your parallel code, then submit it as a batch job.
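A sketch of that workaround (the file names are hypothetical; see the page above for the exact Biowulf submission details):

  module load matlab
  mcc -m my_parfor_code.m                  # compile to a standalone executable; also generates run_my_parfor_code.sh
  qsub -l nodes=1 ./my_batch_script.sh     # submit a batch script that invokes the compiled binary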




3/25/2015

The Helix/Biowulf staff will present two classes in April 2015. Both classes will be hands-on.

Classes are free but registration is required. Helix/Biowulf users will have priority enrollment, and new users are highly encouraged to attend.

Introduction to Linux
---------------------
Date & Time: Tuesday 21 Apr, 9 am - 4 pm
Location: Bldg 12A, Rm B51

This class is intended as a starting point for individuals new to Linux and UNIX. The class will center on basic UNIX/Linux concepts: logging in, navigating the file system, commands for interacting with files, running and viewing processes, checking disk space, and other common tasks. The class will also cover the use of some services specific to Helix/Biowulf usage.
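For a flavor of the material, a few of the everyday commands the class covers:

  pwd            # show the current directory
  ls -lh         # list files with human-readable sizes
  ps -u $USER    # view your running processes
  df -h ~        # check disk space on the filesystem holding your home directory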

Bash Shell Scripting for Helix and Biowulf
------------------------------------------
Date & Time: Tuesday 28 Apr, 9 am - 4 pm
Location: Bldg 12A, Rm B51

The default shell on many Linux systems is bash. Bash shell scripting provides a method for automating common tasks on Linux systems (such as Helix and Biowulf), including transferring and parsing files, creating qsub and swarm scripts, pipelining tasks, and monitoring jobs. This class will give a hands-on tutorial on how to create and use bash shell scripts in a Linux environment.
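As a small taste of the class material, here is a script of the kind used to generate a swarm command file (a sketch; the program and file names are hypothetical):

  #!/bin/bash
  # Write one command per input file into a swarm command file
  for f in /data/$USER/inputs/*.fasta; do
      echo "myprog $f > ${f%.fasta}.out"
  done > myjobs.swarm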

Registration for both classes is available at http://helixweb.nih.gov/nih/classes/index.php




4/30/2015

As announced in an April 23rd email to NIH IRP scientists, a significant effort is underway to increase the capacity of High Performance Computing at NIH.

Hardware for Phase I of the expansion has been installed and is currently undergoing testing and configuration by the Biowulf staff.

What does this mean for Biowulf users?

-- One petabyte of storage has already come online.
-- The addition of over 8,000 compute cores, including 3,000 cores connected to 56 Gb/s Infiniband fabrics.
-- Standard memory sizes of 128 GB/node, with four nodes offering 1 TB of memory.
-- 48 latest-generation GPUs.
-- 10/40 Gb/s networking from storage to compute nodes to increase I/O throughput of large data sets.
-- 100 Gb/s connections to the NIH core network for increased aggregate bandwidth for file transfers.
-- A second tier of storage offering super-high durability.
-- An advanced batch queuing system.
-- Additional staff to support systems and users.

Look for additional information on changes to Biowulf in the weeks ahead.

See http://biowulf.nih.gov/b2-status for the text of the announcement mentioned above and for the latest news on the Biowulf Cluster modernization.

The Biowulf Staff




5/12/2015

Galaxy at Helix Systems will be discontinued as of June 1, 2015. Galaxy requires a continuous, open-ended investment of time and effort to maintain, which has become unjustifiable given its level of usage. Because the Galaxy tools and data are not keeping pace with available updates, users will be better served by the command-line tools already available on Biowulf. In addition, the coming transitions to Biowulf and its batch system make it infeasible to integrate Galaxy into those improvements.

Users are encouraged to finish all work on Galaxy and download any desired data prior to the June 1 shutdown. Datasets held in Galaxy will not be recoverable after the shutdown.

Helix Systems Staff



