The main purpose of the HPC laboratory is to introduce new technologies and parallel computing systems into the training and research process, supporting the development of parallel algorithms and their execution on local cluster systems as well as on regional HPC Grid and Cloud systems.
HPC Cluster (hpc.usm.md): 52 x 2.2 GHz CPU (AMD), 68 GB RAM
Software: OpenMPI, OpenMP, MPI, C, C++, F77, F90, ScaLAPACK, PETSc, Mathematica, gridMathematica
Cloud HPC Cluster (hpc2.usm.md): 100 x 2.6 GHz CPU (Intel), 272 GB RAM
Software: OpenMPI, OpenMP, MPI, C, C++, F77, F90, Python, NumPy, ScaLAPACK, PETSc
Contacts: boris.hancu@gmail.com, elena.calmis@gmail.com
HPC Lab
The HPC laboratory is currently equipped with the following parallel computing systems for training and research:
- The HPC Cluster (hpc.usm.md) – in use since 2007 for training and research at the Faculty of Mathematics and Informatics. The cluster has a total of 52 x 2.2 GHz CPU (AMD) and 68 GB RAM.
- Cloud HPC Cluster (hpc2.usm.md) – launched in 2022 within the research project "Investigation and development of the integrated infrastructure of the unified Cloud Computing environment to support open science", project director Hâncu Boris, associate professor. The cluster has a total of 100 x 2.6 GHz CPU (Intel) and 272 GB RAM.
- Workstations – 18 modern PCs for teachers and students.
- Interactive whiteboard for video conferencing.
HPC software
Libraries and routines for linear algebra, partial differential equations, technical computing, the message passing interface, and more.
ScaLAPACK (Scalable Linear Algebra PACKage) is a library of high-performance linear algebra routines for parallel distributed memory machines.
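For illustration, here is a minimal sketch in C of the first step of any ScaLAPACK program: creating a BLACS process grid. It assumes the C BLACS interface (the Cblacs_* calls), whose prototypes are declared by hand because header availability varies between installations, and it is meant to be launched on 4 MPI processes; the 2 x 2 grid shape is only an example.

/* BLACS grid initialization: the starting point of every ScaLAPACK program.
   Creates a 2 x 2 process grid; intended to be run with 4 MPI processes. */
#include <stdio.h>

/* C BLACS interface; prototypes declared here because header names vary. */
extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);
extern void Cblacs_exit(int cont);

int main(void) {
    int iam, nprocs, ctxt, nprow = 2, npcol = 2, myrow, mycol;

    Cblacs_pinfo(&iam, &nprocs);            /* my rank and the process count */
    Cblacs_get(-1, 0, &ctxt);               /* obtain the default system context */
    Cblacs_gridinit(&ctxt, "Row", nprow, npcol);
    Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

    if (myrow >= 0 && mycol >= 0) {         /* only processes inside the grid */
        printf("Process %d of %d sits at grid position (%d,%d)\n",
               iam, nprocs, myrow, mycol);
        Cblacs_gridexit(ctxt);
    }
    Cblacs_exit(0);                         /* shuts down BLACS (and MPI) */
    return 0;
}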
PETSc (Portable, Extensible Toolkit for Scientific Computing) is a suite of data structures and routines for the parallel solution of partial differential equations.
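The following is a minimal PETSc sketch in C that solves a small tridiagonal (1-D Laplacian) system with the KSP linear solver. The matrix, right-hand side, and problem size are toy data chosen only for illustration, error checking is omitted for brevity, and the build is assumed to go through PETSc's usual makefiles.

static char help[] = "Minimal KSP example: solves a tridiagonal system.\n";

#include <petscksp.h>

int main(int argc, char **argv) {
    Mat A;
    Vec x, b;
    KSP ksp;
    PetscInt i, Istart, Iend, n = 10;

    PetscInitialize(&argc, &argv, NULL, help);

    /* Assemble a distributed tridiagonal matrix (1-D Laplacian). */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (i = Istart; i < Iend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    /* Right-hand side b = 1 and solution vector x. */
    VecCreate(PETSC_COMM_WORLD, &b);
    VecSetSizes(b, PETSC_DECIDE, n);
    VecSetFromOptions(b);
    VecDuplicate(b, &x);
    VecSet(b, 1.0);

    /* Solve Ax = b with the default Krylov method and preconditioner. */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);
    VecView(x, PETSC_VIEWER_STDOUT_WORLD);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
}

Because the solver is configured with KSPSetFromOptions, the Krylov method and preconditioner can be swapped at run time with options such as -ksp_type cg -pc_type jacobi.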
Wolfram Mathematica provides a single integrated, continually expanding system that covers the breadth and depth of technical computing.
The Message Passing Interface (MPI) is a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory. A message carries data between cooperating parallel processes and is used, for example, to distribute work, collect results, or start another process.
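A minimal point-to-point example in C is sketched below; the file name and launch line are illustrative, assuming the cluster's mpicc and mpirun wrappers.

/* Minimal MPI example: process 0 sends an integer to process 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0 && size > 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

Compile and run with, for example: mpicc send_recv.c -o send_recv && mpirun -np 2 ./send_recv.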
OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++ and Fortran, on many platforms and operating systems. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
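A minimal OpenMP sketch in C showing a work-sharing loop with a reduction; the array size is arbitrary, and the OpenMP flag of the compiler (e.g. gcc -fopenmp) is assumed.

/* Minimal OpenMP example: parallel initialization and parallel sum. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];   /* static: kept off the stack */
    double sum = 0.0;
    int i;

    /* Initialize the array in parallel: loop iterations split among threads. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 1.0;

    /* Each thread accumulates a private partial sum; OpenMP combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f (computed with up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

The number of threads is controlled at run time, for example with the OMP_NUM_THREADS environment variable.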
MPI for Python (mpi4py) provides Python bindings for the Message Passing Interface (MPI) standard, allowing programmers to exploit multi-processor computing systems from Python. NumPy offers comprehensive mathematical functions, random number generators, linear algebra routines, Fourier transforms, and more.
HPC Courses
- Parallel algorithms on distributed and shared memory systems
- Parallel algorithms on distributed memory systems:
  - Parallel computing systems
  - Parallel programming models
  - MPI on distributed memory computing systems
  - Dynamic generation of MPI processes
- Parallel algorithms on shared memory systems:
  - OpenMP directives
  - Data management in OpenMP
  - Mixed MPI-OpenMP programming (see the hybrid sketch after this list)
- Parallel linear algebra with ScaLAPACK:
  - SUMMA and Cannon algorithms
  - Data distributions and software conventions
  - Basic BLACS communication routines
  - Using PDGEADD for data parallelization
  - Parallel Basic Linear Algebra Subprograms (PBLAS)
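To give the flavour of the mixed MPI-OpenMP model covered above, here is a minimal hybrid sketch in C; the file name and launch line are illustrative, and process and thread counts are set at launch (e.g. OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid, after compiling with mpicc -fopenmp).

/* Hybrid MPI + OpenMP sketch: each MPI process spawns a team of OpenMP
   threads, and every thread reports its place in the two-level hierarchy. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;

    /* Request thread support suitable for OpenMP regions between MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        printf("MPI process %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}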
The most recent project, "Investigation and development of the integrated infrastructure of the unified Cloud Computing environment to support open science", will create a Cloud service oriented toward a variety of job types and suited to supporting open science, using the multiprocessor clusters of MSU, RENAM and the Institute of Mathematics and Informatics. For more information click HERE.
Contact
Address: Mateevici 60 str., 4th block, office 239, Moldova State University, Chisinau, MD-2002
Phone: (+373) 67 560 048
E-mail: boris.hancu@gmail.com
Office hours: 10:00 - 16:00
MSU Cloud HPC Cluster © 2024