<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>NHR</title>
	<atom:link href="https://science.f4studio.de/feed/" rel="self" type="application/rss+xml" />
	<link>https://science.f4studio.de</link>
	<description>Nationales Hochleistungsrechnen</description>
	<lastBuildDate>Fri, 25 Aug 2023 11:45:11 +0000</lastBuildDate>
	<language>de</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://science.f4studio.de/media/2023/08/FAVicon-NHR.svg</url>
	<title>NHR</title>
	<link>https://science.f4studio.de</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>CFD day, 15 June 2023</title>
		<link>https://science.f4studio.de/cfd-day-15-june-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Thu, 01 Jun 2023 16:20:26 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11441</guid>

					<description><![CDATA[Online course NHR@ZIB and NHR@GWDG]]></description>
										<content:encoded><![CDATA[<h2>Course announcement</h2>
<p id="page-title" class="title colwidth-four">CFD day<br />
June 15th 2023</p>
<p>lecturers: Lewin Stein (Zuse Institute Berlin), Jack Ogaja (GWDG), Immo Huismann (German Aerospace Center)<br />
location: online<br />
details and registration: <a href="http://www.hlrn.de/doc/display/PUB/CFD+Day+2023">workshop page</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Online course CFD, June 2023</title>
		<link>https://science.f4studio.de/online-course-cfd-june-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Thu, 01 Jun 2023 16:11:41 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11432</guid>

					<description><![CDATA[Online course: June 15 2023 CFD day Zuse-Institut Berlin, Gesellschaft für wissenschaftliche Datenverarbeitung Organizer: Dr. Lewin Stein, Dr. Jack Ogaja]]></description>
										<content:encoded><![CDATA[<p>Online course: June 15 2023<br />
<strong><a href="https://www.hlrn.de/doc/display/PUB/CFD+Day+2023" target="_blank" rel="noopener">CFD day</a><br />
</strong>Zuse-Institut Berlin, Gesellschaft für wissenschaftliche Datenverarbeitung<br />
Organizer: Dr. Lewin Stein, Dr. Jack Ogaja</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>System Grete in Göttingen is online</title>
		<link>https://science.f4studio.de/grete-online/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Tue, 02 May 2023 12:30:06 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11417</guid>

					<description><![CDATA[We’re happy to announce the beginning of regular user operation for our new GPU cluster, “Grete” in Göttingen.]]></description>
										<content:encoded><![CDATA[<h2>System Grete in Göttingen</h2>
<p class="part" data-startline="22" data-endline="22">We’re happy to announce the beginning of regular user operation for our new GPU cluster, “Grete” in Göttingen.</p>
<p class="part" data-startline="24" data-endline="24">The main part of the cluster is available via the new partition <code>grete</code>, consisting of 33 nodes equipped with 4 NVIDIA Tesla A100 40 GB GPUs each, 2 AMD Epyc CPUs, and an Infiniband HDR interconnect. The <code>grete:shared</code> partition additionally contains two nodes with 8 A100 80 GB GPUs each. All nodes have 16 CPU cores and 128 GB of memory per GPU. “Grete” has a dedicated new login node, <code>glogin9</code>, also available via its DNS alias <code>glogin-gpu.hlrn.de</code>.</p>
<p class="part" data-startline="26" data-endline="26">Another 3 GPU nodes are available in the partition <code>grete:interactive</code> for interactive usage (limited to 2 jobs per user). The <code>grete:preemptible</code> partition is available for backfilling these nodes. On these nodes, the GPUs are split via Multi-Instance GPU (MIG) into slices with 2 or 3 compute units and 10 or 20 GB of GPU memory each, respectively. These slices can be requested like GPUs in Slurm. For example, <code>-G 2g.10gb:1</code> will allocate one slice with 2 compute units and 10 GB of memory. Preemptible jobs do not cost core hours, but a compute project account has to be used, as for the <code>preempt</code> QoS in the CPU partitions.</p>
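<p class="part">As a sketch (not official documentation), a minimal Slurm batch script requesting one such MIG slice could look as follows; the job name, time limit, and command are placeholders, while the partition name and slice notation are taken from the description above:</p>

```shell
#!/bin/bash
#SBATCH --job-name=mig-test            # placeholder job name
#SBATCH --partition=grete:interactive  # interactive MIG partition (limited to 2 jobs per user)
#SBATCH -G 2g.10gb:1                   # one MIG slice: 2 compute units, 10 GB GPU memory
#SBATCH --time=01:00:00                # placeholder, well below the 2-day default walltime limit

# Show which GPU slice Slurm assigned to the job
srun nvidia-smi -L
```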
<p class="part" data-startline="28" data-endline="28">The default walltime limit on all <code>grete</code> partitions is 2 days.</p>
<p class="part" data-startline="30" data-endline="30">Part of “Grete” is a new dedicated flash-based WORK storage system mounted at <code>/scratch</code> on the new GPU nodes and <code>glogin9</code>. Each user and each compute project has a soft (hard) block quota of 3 TB (6 TB) and 1M (2M) inodes. The system is intended for fast access to the active data set required by the currently running jobs. The existing “Emmy” WORK file system is still reachable from the new cluster under <code>/scratch-emmy</code> via a long-distance connection. The HOME and PERM filesystems are shared between “Emmy” and “Grete”.</p>
<p class="part" data-startline="32" data-endline="33">The default CUDA version is 12.0, and the NVIDIA HPC SDK 23.3 is available via <code>nvhpc/23.3</code>, <code>nvhpc-byo-compiler/23.3</code>, <code>nvhpc-hpcx/23.3</code> and <code>nvhpc-nompi/23.3</code> modules.<br />
CUDA-enabled OpenMPI is available in the form of the HPC-X Toolkit (<code>nvhpc-hpcx/23.3</code>) and the NVIDIA/Mellanox OFED stack (<code>openmpi-mofed/4.1.5a1</code>). Note that previous OpenMPI versions do not provide CUDA support in combination with Infiniband!</p>
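<p class="part">Using the module names listed above, a CUDA-aware MPI build and launch could be set up roughly as follows (the source file and binary names are placeholders):</p>

```shell
# Load the HPC SDK variant that bundles the CUDA-enabled HPC-X OpenMPI
module load nvhpc-hpcx/23.3
# Alternative: the NVIDIA/Mellanox OFED based OpenMPI stack
# module load openmpi-mofed/4.1.5a1

mpicc -o my_app my_app.c   # placeholder source/binary names
srun ./my_app              # launch under Slurm
```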
<p class="part" data-startline="35" data-endline="35">More information about using the new GPU system can be found in [1]. The accounting information has been extended to include the GPUs and MIG slices [2]. For example, in accordance with the recent round of compute time proposals, one full GPU node counts as the equivalent of 600 CPU cores.</p>
<p class="part" data-startline="37" data-endline="37">Please do not hesitate to contact us if you have questions or need support migrating suitable applications to the GPU system.</p>
<p class="part" data-startline="39" data-endline="39">The existing GPU nodes ggpu[01-03] with NVIDIA V100 32 GB GPUs will be migrated to the same site (“RZGö”) as “Grete” in mid-May. Operation will resume with the same “Rocky Linux 8” based OS image as the new GPU nodes and an Infiniband interconnect, as part of the <code>grete:shared</code>, <code>grete:preemptible</code>, and <code>grete:interactive</code> partitions.</p>
<p class="part" data-startline="41" data-endline="42">[1] <a href="https://www.hlrn.de/doc/display/PUB/GPU+Usage" target="_blank" rel="noopener">https://www.hlrn.de/doc/display/PUB/GPU+Usage</a><br />
[2] <a href="https://www.hlrn.de/doc/display/PUB/Accounting+in+Core+Hours" target="_blank" rel="noopener">https://www.hlrn.de/doc/display/PUB/Accounting+in+Core+Hours</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Parallel programming day, 27 April 2023</title>
		<link>https://science.f4studio.de/parallel-programming-day-27-april-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Thu, 06 Apr 2023 07:39:02 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://science.f4studio.de/?p=14893</guid>

					<description><![CDATA[Online course NHR@ZIB]]></description>
										<content:encoded><![CDATA[<h2>Course announcement</h2>
<p id="page-title" class="title colwidth-four">Parallel programming day<br />
April 27 2023</p>
<p>lecturers: Matthias Läuter, Lewin Stein (Zuse Institute Berlin)<br />
location: online<br />
language: German<br />
details and registration: <a href="https://www.zib.de/node/5683">workshop page</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Online course MPI, April 2023</title>
		<link>https://science.f4studio.de/online-course-mpi-april-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Thu, 06 Apr 2023 07:23:17 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<guid isPermaLink="false">https://science.f4studio.de/?p=14902</guid>

					<description><![CDATA[Online course: April 27 2023 Parallel programming day Zuse-Institut Berlin Organizer: Dr. Matthias Läuter]]></description>
										<content:encoded><![CDATA[<p>Online course: April 27 2023<br />
<strong><a href="https://www.zib.de/node/5683">Parallel programming day</a><br />
</strong>Zuse-Institut Berlin<br />
Organizer: Dr. Matthias Läuter</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>HPC course MPI OpenMP, March 2023</title>
		<link>https://science.f4studio.de/hpc-course-mpi-openmp-march-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Mon, 06 Mar 2023 12:49:29 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<guid isPermaLink="false">https://science.f4studio.de/?p=14906</guid>

					<description><![CDATA[Workshop: March 13-17 2023 Introduction to parallel programming with MPI and OpenMP Technische Universität Berlin, Zuse-Institut Berlin Organizer: Dr. Matthias Läuter]]></description>
										<content:encoded><![CDATA[<p>Workshop: March 13-17 2023<br />
<a href="http://www.tu.berlin/cfd/studium-lehre/datenverarbeitung/mpi" target="_blank" rel="noopener noreferrer"> <b>Introduction to parallel programming with MPI and OpenMP</b> </a><br />
Technische Universität Berlin, Zuse-Institut Berlin<br />
Organizer: Dr. Matthias Läuter</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>HPC course Berlin, March 2023</title>
		<link>https://science.f4studio.de/hpc-kurs-berlin-march-2023/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Mon, 06 Mar 2023 12:47:31 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11368</guid>

					<description><![CDATA[Course TU Berlin and NHR@ZIB]]></description>
										<content:encoded><![CDATA[<h2>Course Announcement</h2>
<p>Introduction to parallel programming with MPI and OpenMP<br />
March 13-17 2023</p>
<p>lecturer: Matthias Läuter<br />
location: Technische Universität Berlin and NHR at Zuse Institute Berlin<br />
language: German<br />
details and registration: <a href="http://www.tu.berlin/cfd/studium-lehre/datenverarbeitung/mpi">TU-Berlin</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Apply for computing time Oct 16th 2022</title>
		<link>https://science.f4studio.de/apply-for-computing-time-oct-16th-2022/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Fri, 14 Oct 2022 12:43:02 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://science.f4studio.de/?p=14897</guid>

					<description><![CDATA[The NHR and HLRN sites NHR@ZIB and NHR@Göttingen are inviting project proposals applying for computing time on the HLRN-IV systems Emmy and Lise.]]></description>
										<content:encoded><![CDATA[<h2>Apply for computing time by October 16th 2022</h2>
<p>The NHR and HLRN sites NHR@ZIB and NHR@Göttingen are inviting project proposals applying for computing time on the HLRN-IV systems Emmy and Lise.</p>
<p>The next deadline is on October 16th 2022, at 23:59.</p>
<p>Resources are allocated for one year starting on January 1st 2023, on a quarterly basis after review of the project proposal; see www.hlrn.de/doc/display/PUB/Application+Process. Notifications will be sent out around the end of December 2022.</p>
<p>We have slightly improved the template for the main and follow-up proposals in order to simplify using the whitelisting option. Since 2022Q4, you apply for CPU core hours (in steps of 1000) instead of NPL. You can also apply for GPU resources, which we expect to be available at both sites from Q1 2023.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>New Procedure for Computing Time July 28th 2022</title>
		<link>https://science.f4studio.de/apply-for-computing-time-jul-28th-2022/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Wed, 13 Jul 2022 10:44:24 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11313</guid>

					<description><![CDATA[The NHR and HLRN sites NHR@ZIB and NHR@Göttingen are inviting project proposals.]]></description>
										<content:encoded><![CDATA[<h2>New Procedure for Computing Time by July 28th, 2022</h2>
<p>The NHR and HLRN sites NHR@ZIB and NHR@Göttingen are inviting project proposals applying for computing time on the HLRN-IV systems Emmy and Lise.</p>
<p>The next deadline is on July 28th, 2022, at 23:59.</p>
<p>Resources are allocated for one year starting on October 1st 2022 on a quarterly basis after review of the project proposal, see https://www.hlrn.de/doc/display/PUB/Application+Process.</p>
<p>Notifications will be sent out around the end of September 2022.</p>
<p>Since the two HLRN sites in Berlin and Göttingen have joined the &#8220;NHR Verein&#8221; [1], we had to implement changes to the process in order to contribute to the goal of aligning and streamlining application processes within NHR. You can find the most important changes on the webpage https://www.hlrn.de/doc/display/PUB/News+for+proposals.</p>
<p>[1] https://www.nhr-verein.de/</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Apply for Computing Time Apr 28th, 2022</title>
		<link>https://science.f4studio.de/apply-for-computing-time-apr-28th-2022/</link>
		
		<dc:creator><![CDATA[kulot]]></dc:creator>
		<pubDate>Fri, 08 Apr 2022 13:51:25 +0000</pubDate>
				<category><![CDATA[Latest Posts]]></category>
		<guid isPermaLink="false">https://www.hlrn.de/?p=11258</guid>

					<description><![CDATA[The Scientific Board of the HLRN is inviting project proposals applying for computing time on the HLRN system. The next deadline is on Apr 28th, 2022, at 23:59.]]></description>
										<content:encoded><![CDATA[<h2>Apply for Computing Time by April 28th, 2022</h2>
<p>The Scientific Board of the HLRN (&#8220;Wissenschaftlicher Ausschuss&#8221;) is inviting project proposals (&#8220;Großprojektantrag&#8221;) applying for computing time on the HLRN system. The next deadline is on April 28th, 2022, at 23:59.</p>
<p>Resources are allocated for one year starting on July 1st 2022, on a quarterly basis after review of the proposal; see www.hlrn.de/doc/display/PUB/Application+Process. Notifications will be sent out around the end of June 2022.</p>
<p>Please note:<br />
The LaTeX template must be used for the project abstract; see www.hlrn.de/doc/display/PUB/Project+proposal.</p>
<p>Please contact your HLRN project consultant before submitting the proposal.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
