2025-04-17 Matthias Gobbert:
I attended (online) the Getting Started with Chip training.
– They will have standard software for Python like matplotlib, numpy, etc.
–> Which other packages are available? Can someone find a list of the other ‘standard’ ones that are installed? (See the sketch after this entry for one way to check.)
– You are encouraged to use virtual environments for Python.
–> Is this the correct phrase? Can someone with experience with this provide an example? (See the sketch after this entry.)
– They offer help through “office hours” [which I understand are best arranged by filing a ticket first and agreeing on when/where to meet, instead of just walking in].
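A minimal sketch covering both questions above, run from an interactive shell on a compute node; the module name “python”, the environment name “myenv”, and the package name “somepackage” are my own assumptions/placeholders, not confirmed names on chip:
module avail python          # see which Python modules/versions are provided
module load python           # load one of them (exact module name is an assumption)
pip list                     # list the ‘standard’ packages that come with it
python -m venv ~/myenv       # create a virtual environment in the home directory
source ~/myenv/bin/activate  # activate it; the prompt changes to (myenv)
pip install somepackage      # installs now go into the environment only
deactivate                   # leave the virtual environment when done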
2025-04-16: I am trying to use this page under CIRC to collect our more detailed and practical information on how to use chip.
(0) Please study the DoIT documentation yourself! Three places:
(a) hpcf.umbc.edu -> Compute tab -> Overview (has a table that describes all portions of chip)
(b) hpcf.umbc.edu -> Compute tab -> slurm:chip-cpu (has a table with the exact list of which nodes c?? are in which partition)
(c) hpcf.umbc.edu -> Compute tab -> User Documentation -> slurm (about the middle of the table of contents in the main part of the screen).
(1) Please confirm that you can log in to chip.rs.umbc.edu, for instance using PuTTY from Windows or a terminal/shell from Mac or Linux.
The new chip cluster lives behind the UMBC firewall, so either (i) connect to the VPN first or (ii) approve a Duo push to log in. I recommend using the VPN, since that takes care of all connection issues, such as from multiple shells, WinSCP, and more (see the login example at the end of this item).
Note: Instructions and download links for the UMBC GlobalProtect VPN for Windows and macOS can be found here: https://umbc.atlassian.net/wiki/spaces/faq/pages/30754220/Getting+Connected+with+the+UMBC+GlobalProtect+VPN
For Linux users, please refer to the official instructions provided by Palo Alto Networks:
https://docs.paloaltonetworks.com/globalprotect/6-2/globalprotect-app-user-guide/globalprotect-app-for-linux/use-the-globalprotect-app-for-linux
You can download the zipped .tar file for the Linux client here:
https://drive.google.com/file/d/1cKFRjv8bt0JQ0h_eS2kXLQhtQbbDfZs7/view?usp=sharing
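As a quick sketch of item (1): once the VPN is connected (or you are ready to approve a Duo push), logging in from a Mac/Linux terminal is
ssh username@chip.rs.umbc.edu
with username replaced by your own UMBC username; in PuTTY, enter chip.rs.umbc.edu as the Host Name instead.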
(2) The home directory may be very bare. In particular, DoIT is not creating symbolic links any more. Enter the command “alias” to see all aliases and notice how something like “gobbert_user” is now an alias for “cd gobbert_user”, as if gobbert_user were a symbolic link. To get our old behavior with links back, just create them yourself:
ln -s /umbc/rs/gobbert/common/ gobbert_common
ln -s /umbc/rs/gobbert/users/gobbert/ gobbert_user
ln -s /umbc/rs/gobbert/group_saved/ gobbert_saved
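To check that the links were created and point to the intended directories, list them with their targets, for example:
ls -l gobbert_common gobbert_user gobbert_saved
Each line of the output should end with an arrow -> followed by the corresponding /umbc/rs/gobbert/... path used above.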
(3) The startup file is .bashrc in the home directory. I copied some material from taki’s .bashrc over; please also add the line “umask 077”. It seems that this umask setting is not provided any more. We need to research this more, as the behavior is not clear to me. Anyway, my .bashrc on chip looks as follows; keep reading for more discussion about it:
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

PATH=$PATH:/home/$USER
export PATH

module load gcc
module load slurm

# User specific aliases and functions
umask 007
set -o noclobber
# set -o rmstar equivalent to this in bash?
set -o notify
alias ll='ls -lF'
alias lt='ls -ltF'
alias lr='ls -ltrF'
alias mv='mv -i'
alias cp='cp -i'
alias h='history 20'
alias interactive-srun-match='srun --cluster=chip-cpu --account=pi_gobbert --partition=match --qos=shared --time=7:00:00 --mem=16G --pty $SHELL'
alias interactive-srun-2018='srun --cluster=chip-cpu --account=pi_gobbert --partition=2018 --qos=medium --time=7:00:00 --mem=16G --pty $SHELL'
alias interactive-salloc-match='salloc --cluster=chip-cpu --account=pi_gobbert --partition=match --qos=shared --time=7:00:00 --mem=16G'
alias interactive-salloc-2018='salloc --cluster=chip-cpu --account=pi_gobbert --partition=2018 --qos=medium --time=7:00:00 --mem=16G'
alias load-latex='module load texLive/2025'
export TEXINPUTS=/umbc/rs/gobbert/group_saved/soft/tex/inputs:.:
export BSTINPUTS=/umbc/rs/gobbert/group_saved/soft/tex/inputs:.:
export BIBINPUTS=/umbc/rs/gobbert/group_saved/soft/tex/biblio/curr:
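After editing .bashrc, the settings take effect at the next login; to pick them up in the current shell right away, source the file:
source ~/.bashrc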
(4) With the user/login/edge node on chip being virtual, we should use an interactive shell on a compute node even for simple tasks, for instance LaTeX, which cannot be run on the user node any more. Notice that I defined two interactive srun aliases (and two salloc variants) above. These will be available after logging in to chip. “interactive-srun-2018” should be sufficient for light-weight tasks including compiling.
To load the module for LaTeX’s pdflatex command, note my alias “load-latex” above in my .bashrc. I use that after I have an interactive shell on a compute node, as in the sketch below.
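Putting item (4) together, a typical light-weight session looks like the following sketch; it assumes the aliases from my .bashrc above, and paper.tex is just a placeholder file name:
interactive-srun-2018   # request an interactive shell on a compute node in the 2018 partition
load-latex              # alias for: module load texLive/2025
pdflatex paper.tex      # compile on the compute node
exit                    # leave the compute node and release the allocation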
(5) How to handle the .cache directory needs to be explained. That is .cache for VSCode. Which other tools use it? Python?
(6) Geant4 version 10.7.3 has been installed at the following path:
/umbc/rs/gobbert/common/research/geant4/Geant4.10.7/geant4.10.07
To enable Geant4 in your environment, please add the following lines to your .bashrc file:
source /umbc/rs/gobbert/common/research/geant4/Geant4.10.7/geant4.10.07/bin/geant4.sh
source /umbc/rs/gobbert/common/research/geant4/Geant4.10.7/geant4.10.07/share/Geant4-10.7.3/geant4make/geant4make.sh
To verify that Geant4 has been successfully loaded, run the following command:
(base) [ehsans1@c21-16 ~]$ geant4-config --version
10.7.3
(base) [ehsans1@c21-16 ~]$
If you see the version number 10.7.3 as shown above, it confirms that Geant4 is correctly loaded in your environment on the cluster.
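Following the load-latex pattern in the .bashrc above, the two source lines could also be wrapped in a small shell function in .bashrc, so that Geant4 is only set up when needed. A sketch (the name load-geant4 is my own choice, not an existing module or alias):
load-geant4 () {
    source /umbc/rs/gobbert/common/research/geant4/Geant4.10.7/geant4.10.07/bin/geant4.sh
    source /umbc/rs/gobbert/common/research/geant4/Geant4.10.7/geant4.10.07/share/Geant4-10.7.3/geant4make/geant4make.sh
}
After adding it, simply type load-geant4 on a compute node before working with Geant4.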