Center for Computational Molecular Science and Technology, School of Chemistry and Biochemistry, Georgia Institute of Technology

CCMST Weekly News, March 10, 2011

March 10, 2011 11:11 pm EST

1. Announcements
2. Statistics
3. Tip of the Week

ANNOUNCEMENTS

Upcoming Seminars

March 14, 2011 11:00 AM – 12:00 PM
MoSE 3201A
Prof. Christopher Jaroniec, The Ohio State University
Atomic-resolution studies of protein structure and dynamics by magic-angle spinning solid-state NMR spectroscopy

March 15, 2011 4:00 PM – 5:00 PM
MoSE 3201A
Prof. John Asbury, Pennsylvania State University
Pathways to More Efficient Organic Solar Cells: what we can learn by watching electrons move in real time

March 17, 2011 4:00 PM – 5:00 PM
MoSE G011
Prof. Gary Schuster, Georgia Tech
Charge Transport in DNA: Oxidative Damage and Self-Organizing Conducting Polymers

April 11, 2011 11:00 AM – 12:00 PM
MoSE 3201A
Prof. Enrico Clementi, University of Insubria, Italy
With computers from atoms to macromolecular systems

NOTE: Planned Power Outage Postponed

Please note that the power outage for the MS&E building, originally planned for March 26-27, 2011, has been postponed until later in the spring semester. The new date is to be determined.

STATISTICS

FGATE

Uptime: 56 days
/home directory usage: 74% (1.5 TB available)
/backups directory usage: 87%

LSF usage for Week 9 (2/28-3/6) (times are in minutes)
Group       Jobs   Total CPU     %   Avg CPU   Avg Wait   Avg Trnr.
Bredas        35       82926    4%      2369       1233       2731
Hernandez    406      391676   20%       965        524       1574
Sherrill    1623      282542   15%       174        163        340
Other          8       25607    1%      3201          1       3205
Total       2072      782750   40%       378        251        634

Note: percentages refer to the total CPU time available for the period.
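To make the column definitions concrete: Avg CPU is simply Total CPU divided by the number of jobs, rounded to the nearest minute. A quick shell check against three of the Week 9 rows above (figures copied from the table):

```shell
#!/bin/sh
# Sanity check on the LSF table: Avg CPU = Total CPU / Jobs (minutes).
# Each entry is "group jobs total_cpu", taken from the Week 9 FGATE rows.
for entry in "Bredas 35 82926" "Hernandez 406 391676" "Sherrill 1623 282542"; do
  set -- $entry                     # $1=group, $2=jobs, $3=total CPU minutes
  avg=$(( ($3 + $2 / 2) / $2 ))     # add half the divisor to round, not truncate
  printf '%s: %s min average CPU per job\n' "$1" "$avg"
done
```

The printed values (2369, 965, and 174) match the Avg CPU column.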

Most productive user of the Week: loriab (244367 CPU minutes).

LSF usage for Month of February (times are in minutes)
Group       Jobs   Total CPU     %   Avg CPU   Avg Wait   Avg Trnr.
Bredas      3109      915294   12%       294        462        695
Hernandez   1549     2540412   33%      1640        583       3338
Sherrill    4486     1084284   14%       242       1283       1530
Other        147      288836    4%      1965        207       2198
Total       9291     4828686   62%       520        874       1562

Note: percentages refer to the total CPU time available for the period.

EGATE

Uptime: 3 days
/theoryfs/common directory usage: 41% (3954 GB available)
/theoryfs/ccmst directory usage: 89% (102 GB available)

LSF usage for Week 8 (2/21-2/27) (times are in minutes)
Group       Jobs   Total CPU     %   Avg CPU   Avg Wait   Avg Trnr.
Bredas         6       59409    4%      9901         71      10061
Hernandez    662      840507   56%      1270       2076       6974
Sherrill     742      109911    7%       148        142        291
Other        234       11350    1%        49         41         89
Total       1644     1021176   68%       621        906       2989

Note: percentages refer to the total CPU time available for the period.

Most productive user of the Week: hagy (524558 CPU minutes).

LSF usage for Week 9 (2/28-3/6) (times are in minutes)
Group       Jobs   Total CPU     %   Avg CPU   Avg Wait   Avg Trnr.
Bredas        40       19931    1%       498          4        513
Hernandez    384      728839   48%      1898        860       2958
Sherrill     231       56543    4%       245        345        601
Other        318      167127   11%       526        106        758
Total        973      972440   64%       999        456       1579

Note: percentages refer to the total CPU time available for the period.

Most productive user of the Week: galen (455634 CPU minutes).

LSF usage for Month of February (times are in minutes)
Group       Jobs   Total CPU     %   Avg CPU   Avg Wait   Avg Trnr.
Bredas        20      132801    2%      6640         38       6731
Hernandez   2564     3011224   50%      1174        965       3146
Sherrill    3723      997977   17%       268       1187       1456
Other        469       17082    0%        36         33         71
Total       6776     4159073   69%       614       1019       2015

Note: percentages refer to the total CPU time available for the period.

TIP OF THE WEEK

By Massimo

Working With Ggate: Job Script Syntax
Processor Request
  Syntax:  #PBS -l nodes=4:ppn=1
  Default: #PBS -l nodes=1:ppn=1
  Notes:   Only the total number of processors is honored by the queue; processes are packed onto nodes whenever possible.

Memory Request
  Syntax:  #PBS -l pmem=2000mb
  Default: #PBS -l pmem=8000mb
  Notes:   The default is rather high. Memory requests are enforced: jobs exceeding 105% of the requested memory are killed.

Wall Clock Request
  Syntax:  #PBS -l walltime=DD:HH:MM:SS
  Default: infinite
  Notes:   With the default request, jobs are not eligible for backfilling. Wall clock time is enforced: after 120% of the requested time has elapsed, the job is terminated.

I/O Intensive Jobs
  Syntax:  #PBS -l nodes=1:ppn=1:gres=lscr
  Notes:   The lscr attribute applies to each process requested by the job. No more than two I/O-intensive processes are allowed on the same node. Use only for I/O-intensive post-HF jobs.

Define Job Name
  Syntax:  #PBS -N jobname
  Default: name of the submission script
  Notes:   This is the job name displayed by the qstat and showq commands.

Define Output Name
  Syntax:  #PBS -o path/to/outfile
  Default: job_name.o<sequence_number>
  Notes:   This is the output of the queue system; the output of the calculation itself can be redirected inside the job script.

Join Output and Error Files
  Syntax:  #PBS -j oe
  Default: separate output and error files

Define e-mail Address
  Syntax:  #PBS -M user@host
  Default: submitting_user@submitting_host
  Notes:   One or more e-mail addresses to which messages about the job are sent.

Set e-mail Options
  Syntax:  #PBS -m nbae
  Default: #PBS -m a
  Notes:   n: send no mail; b: send mail when the job starts running; e: when the job exits; a: when the job aborts.

Define Environment Variable
  Syntax:  #PBS -v VARIABLE=value
  Notes:   Can also be a comma-separated list of variable names and values.

Export Current Environment to Job
  Syntax:  #PBS -V
  Notes:   Without this option, only a few variables (HOME, PATH, and a few others) are exported to the job.
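Putting several of these directives together, a complete job script might look like the following sketch. The job name, resource amounts, e-mail address, and the final command are placeholders for illustration, not recommended values; adjust them for your own calculation.

```shell
#!/bin/sh
# Request 4 processors (packed where possible), 2 GB per process, and a
# finite 12-hour wall clock limit so the job is eligible for backfilling.
#PBS -l nodes=4:ppn=1
#PBS -l pmem=2000mb
#PBS -l walltime=0:12:00:00
# Name the job and merge the queue system's output and error files.
#PBS -N example_job
#PBS -j oe
# Mail on abort only (the default option), to an explicit address.
#PBS -M user@host
#PBS -m a

# PBS starts the script in the home directory; move to the directory the
# job was submitted from, where the input files usually live.
cd "$PBS_O_WORKDIR"

# Placeholder for the actual calculation; redirect its output explicitly
# rather than relying on the queue system's output file.
echo "Job running on $(hostname)" > example_job.log
```

Submit the script with qsub (e.g. `qsub myjob.pbs`) and monitor it with qstat or showq.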

Do you have usage tips that you want to share with the other CCMST users? Please send them to Massimo (massimo.malagoli@chemistry.gatech.edu) for inclusion in the Tip of the Week section.