Scientific Computing
The High Performance Computing (HPC) cluster is available to scientists at TU Berlin and offers about 3,800 CPU cores and 46 GPUs, reaching a total peak performance of more than 348 trillion floating-point operations per second. With an architecture specifically tailored to data-intensive computing, the cluster provides ideal conditions for users to pursue their ambitious research goals during the scheduled operating times.
Access to the cluster
A TUB account is required for access; it is granted to every member of TU Berlin through provisioning. For authorization, a role administrator of a department must create a team to manage permissions; please make sure the group feature is enabled for the team. Once Michael Flachsel has been notified of the team's name, every team member is added to the SLURM database.
Login to the cluster is done via an SSH connection to the frontend servers (frontend-01 and frontend-02). To connect, use the virtual address gateway.hpc.tu-berlin.de, which is reachable from within the TUB subnetwork. The TUB account (username/password) is used for authentication.
Microsoft Windows requires additional software to access the cluster; PuTTY or MobaXterm are recommended.
Linux systems can use the built-in SSH client to connect. From within the TUB subnetwork, execute the following command in a terminal:
ssh [TUB account name]@gateway.hpc.tu-berlin.de
Outside of the TUB subnetwork, connecting requires either a VPN (Virtual Private Network) or the SSH jumphost. How to establish a VPN connection to the TUB subnetwork is explained here. The jumphost is reachable via:
ssh [TUB account name]@sshgate.tu-berlin.de
From there you can continue via SSH to the gateway (see above).
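Instead of logging in twice, you can let SSH chain the two hops. The following is a minimal sketch using OpenSSH's ProxyJump feature; the host alias hpc is an arbitrary name chosen here:

ssh -J [TUB account name]@sshgate.tu-berlin.de [TUB account name]@gateway.hpc.tu-berlin.de

or, as a reusable entry in ~/.ssh/config:

Host hpc
    HostName gateway.hpc.tu-berlin.de
    User [TUB account name]
    ProxyJump [TUB account name]@sshgate.tu-berlin.de

After that, ssh hpc connects through the jumphost in a single step.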
Access to the cluster data storage (BeeGFS), which also holds the user files, is available via scp or rsync (Linux); Windows systems need additional software that supports the scp/ssh standard. Use sshfs to access files directly (other methods always produce copies on the remote system).
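For illustration, typical transfer and mount commands could look like the following; the file and directory names are placeholders, and using the gateway address for transfers is an assumption (check the HPC-Wiki for the recommended host):

scp results.dat [TUB account name]@gateway.hpc.tu-berlin.de:~/
rsync -avz ./data/ [TUB account name]@gateway.hpc.tu-berlin.de:~/data/
mkdir -p ~/hpc && sshfs [TUB account name]@gateway.hpc.tu-berlin.de: ~/hpc

The sshfs mount makes the remote home directory appear under ~/hpc without creating local copies; unmount it with fusermount -u ~/hpc.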
Help & documentation
User documentation is available in the HPC-Wiki. There you can find information about the hardware, the available software, batch commands (how to start a program), and notes on using the GPUs. To access restricted areas of the wiki, log in with your TUB account credentials.
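Since jobs are started through SLURM, a minimal batch script is sketched below. The resource values are placeholders and the GPU request is an assumption; the actual partitions and options are documented in the HPC-Wiki.

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00
# Uncomment to request a GPU (resource name is an assumption, see the wiki):
##SBATCH --gres=gpu:1

./my_program   # placeholder for your own executable

Submit the script with sbatch job.sh and check its status with squeue.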
Phone calls are answered during the operating hours of the ZE Campusmanagement (Mon-Thu 10:00-16:00, Fri 10:00-15:00). You may also contact us by e-mail. Phone numbers and the office location are listed under Contact at the end of this page.
A forum for cluster users is available in the dedicated ISIS course (enrolment key: rechenknecht).
Hardware
Count | Type | CPU | CPUs per Node | Memory | GPU | GPUs per Node |
---|---|---|---|---|---|---|
132 | Standard | Xeon E5-2630v4 | 2 | 256 GiB | - | 0 |
20 | GPU | Xeon E5-2630v4 | 2 | 512 GiB | Tesla P100 | 2 |
1 | Retrofitted (GPU added) | Xeon E5-2630v4 | 2 | 256 GiB | Tesla P100 | 2 |
26 | GPU-upgradeable | Xeon E5-2630v4 | 2 | 256 GiB | - | 0 |
1 | SMP | Xeon E7-4850v4 | 4 | 3 TiB | - | 0 |
2 | SMP-GPU | Xeon E7-4850v4 | 4 | 3 TiB | Tesla P100 | 2 |
Server Count | Drives per Server | Drive Size | Storage Size (total) | Storage Size (usable) |
---|---|---|---|---|
4 | 60 | 6 TB | 1440 TB | 1037 TB |
CPU | Frequency | Cores | Threads/Core | FLOPs/Cycle |
---|---|---|---|---|
Intel Xeon E5-2630v4 | 2.2 GHz | 10 | 2 | DP/SP*: 16/32 |
Intel Xeon E7-4850v4 | 2.1 GHz | 16 | 2 | DP/SP*: 16/32 |
* double precision / single precision
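As a rough cross-check of the peak-performance figure quoted above (the per-GPU value of about 4.7 TFLOPS double precision is NVIDIA's specification for the Tesla P100 PCIe card and is an assumption here):

CPU nodes: 3,580 E5 cores × 2.2 GHz × 16 FLOP/cycle + 192 E7 cores × 2.1 GHz × 16 FLOP/cycle ≈ 132 TFLOPS (double precision)
GPUs: 46 × ~4.7 TFLOPS ≈ 216 TFLOPS (double precision)
Total: ≈ 348 TFLOPS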
Contact
Room EN-K048
+49 (0)30 314 74591
+49 (0)30 314 74592
+49 (0)30 314 74593