
begin processing todos for chapter 3, fix bad memory throughput calculation resulting in more todos for adaptation of evaluation

master
Constantin Fürst 10 months ago
parent
commit
49d5a61a2c
1. BIN thesis/bachelor.pdf
2. 76 thesis/content/30_performance.tex

76 thesis/content/30_performance.tex

@@ -1,11 +1,7 @@
\chapter{Performance Microbenchmarks}
\label{chap:perf}
In this chapter, we measure the performance of the \gls{dsa}, with the goal to determine an effective utilization strategy to apply the \gls{dsa} to \gls{qdp}. In Section \ref{sec:perf:method} we lay out our benchmarking methodology, then perform benchmarks in \ref{sec:perf:bench} and finally summarize our findings in \ref{sec:perf:analysis}. \par
The performance of \gls{dsa} has been evaluated in great detail by Reese Kuper et al. in \cite{intel:analysis}. Therefore, we will perform only a limited amount of benchmarks with the purpose of verifying the figures from \cite{intel:analysis} and analysing best practices and restrictions for applying \gls{dsa} to \gls{qdp}. \par
\todo{reformulate}
In this chapter, we measure the performance of the \gls{dsa}, with the goal of determining an effective utilization strategy for applying the \gls{dsa} to \gls{qdp}. In Section \ref{sec:perf:method} we lay out our benchmarking methodology, then perform benchmarks in \ref{sec:perf:bench} and finally summarize our findings in \ref{sec:perf:analysis}. As the performance of the \gls{dsa} has been evaluated in great detail by Reese Kuper et al. in \cite{intel:analysis}, we will perform only a limited number of benchmarks, with the purpose of determining behaviour in a multi-socket system, penalties from using \gls{intel:dml}, and throughput for transfers from \glsentryshort{dram} to \gls{hbm}. \par
\section{Benchmarking Methodology}
\label{sec:perf:method}
@@ -17,20 +13,14 @@ The performance of \gls{dsa} has been evaluated in great detail by Reese Kuper e
\label{fig:perf-xeonmaxnuma}
\end{figure}
The benchmarks were conducted on a dual-socket server equipped with two Intel Xeon Max 9468 CPUs, each with 4 nodes that have access to 16 GiB of \gls{hbm} and 12 cores. This results in a total of 96 cores and 128 GiB of \gls{hbm}. The layout of the system is visualized in Figure \ref{fig:perf-xeonmaxnuma}. For configuring it, we follow Section \ref{sec:state:setup-and-config}. \par
\todo{refer to intel ark as cite}
As \gls{intel:dml} does not have support for \glsentryshort{dsa:dwq}s, we run benchmarks exclusively with access through \glsentryshort{dsa:swq}s. The application written for the benchmarks can be obtained in source form under the directory \texttt{benchmarks} in the thesis repository \cite{thesis-repo}. \par
The benchmarks were conducted on a dual-socket server equipped with two Intel Xeon Max 9468 CPUs, each with 4 \glsentryshort{numa:node}s that have access to 16 GiB of \gls{hbm} and 12 cores. This results in a total of 96 cores and 128 GiB of \gls{hbm}. The layout of the system is visualized in Figure \ref{fig:perf-xeonmaxnuma}. For configuring it, we follow Section \ref{sec:state:setup-and-config}. \cite{intel:xeonmax-ark} \par
\todo{refer back to state 2.4 where this is mentioned}
As \gls{intel:dml} does not have support for \glsentryshort{dsa:dwq}s (see Section \ref{sec:state:dml}), we run benchmarks exclusively with access through \glsentryshort{dsa:swq}s. The application written for the benchmarks can be obtained in source form under the directory \texttt{benchmarks} in the thesis repository \cite{thesis-repo}. \par
The benchmark performs node setup as described in Section \ref{sec:state:dml} and allocates source and destination memory on the nodes passed in as parameters. To avoid page faults affecting the results, the entire memory regions are written to before the timed part of the benchmark starts. We therefore also do not use \enquote{.block\_on\_fault()}, as we did for the memcpy-example in Section \ref{sec:state:dml}. \par
The benchmark performs \gls{numa:node} setup as described in Section \ref{sec:state:dml} and allocates source and destination memory on the \gls{numa:node}s passed in as parameters. To avoid page faults affecting the results, the entire memory regions are written to before the timed part of the benchmark starts. We therefore also do not use \enquote{.block\_on\_fault()}, as we did for the memcpy-example in Section \ref{sec:state:dml}. \par
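To make this setup step concrete, the following is a minimal sketch of the allocation and pre-faulting, assuming libnuma for memory placement and the \gls{intel:dml} high-level C++ interface; the function and variable names are illustrative and not taken from the benchmark code in \cite{thesis-repo}.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <numa.h>
#include <dml/dml.hpp>

// Sketch: place source and destination buffers on the requested NUMA nodes,
// touch every page up front so no page faults occur in the timed region
// (which is why .block_on_fault() is not required here), then perform one
// copy through the DSA hardware path via the shared work queue.
bool copy_between_nodes(int src_node, int dst_node, std::size_t size) {
    auto* src = static_cast<uint8_t*>(numa_alloc_onnode(size, src_node));
    auto* dst = static_cast<uint8_t*>(numa_alloc_onnode(size, dst_node));
    if (src == nullptr || dst == nullptr) return false;

    std::memset(src, 0xAB, size);  // pre-fault source pages
    std::memset(dst, 0x00, size);  // pre-fault destination pages

    auto handler = dml::submit<dml::hardware>(
        dml::mem_copy, dml::make_view(src, size), dml::make_view(dst, size));
    auto result = handler.get();   // wait for completion

    numa_free(src, size);
    numa_free(dst, size);
    return result.status == dml::status_code::ok;
}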
Timing in the outer loop may display lower throughput than is actually achieved. This is the case should one of the \gls{dsa}s participating in a given task finish earlier than the others. We decided to measure the maximum time and therefore minimum throughput for these cases, as we want the benchmarks to represent the peak achievable for distributing one task over multiple engines and not executing multiple tasks of a disjoint set. As a task can only be considered complete when all subtasks are completed, the minimum throughput represents this scenario. This may give an advantage to the peak CPU throughput benchmark we will reference later on, as it does not have this restriction placed upon it. \par
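Expressed as a formula, with \(n\) denoting the number of participating \gls{dsa}s, \(s\) the size of the chunk each one copies, and \(t_i\) the completion time of \gls{dsa} \(i\) (symbols chosen here for illustration only), the reported throughput is
\[T = \frac{n \times s}{\max_{i = 1,\dots,n} t_i}\]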
\todo{mention configuration used, potential for futurework: evaluate multiengine per group performance}
\begin{figure}[!t]
\centering
\includegraphics[width=0.35\textwidth]{images/nsd-benchmark.pdf}
@@ -38,9 +28,7 @@ Timing in the outer loop may display lower throughput than actual. This is the c
\label{fig:benchmark-function:outer}
\end{figure}
To get accurate results, the benchmark is repeated 10 times. Each iteration is timed from beginning to end, marked by yellow in Figure \ref{fig:benchmark-function:outer}. For small task sizes, the iterations complete in a very short amount of time, which can have adverse effects on the results. Therefore, we repeat the code of the inner loop for a configurable amount, virtually extending the duration of a single iteration for these cases. \par
\todo{use concrete amount instead of "configurable"}
To get accurate results, the benchmark is repeated \(10\) times. Each iteration is timed from beginning to end, marked by yellow in Figure \ref{fig:benchmark-function:outer}. For small task sizes, the iterations complete in a very short amount of time, which can have adverse effects on the results. Therefore, we repeat the code of the inner loop a configurable number of times, virtually extending the duration of a single iteration for these cases. The chosen internal repetition count is \(10\,000\) for transfers in the range of \(1-8\ KiB\), \(1\,000\) for \(1\ MiB\) and one for larger instances. \par
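As an illustration of this repetition scheme, the sketch below shows the loop structure; the \texttt{copy\_task} callable and the function name are stand-ins for the actual benchmark code and are not taken from \cite{thesis-repo}.

#include <chrono>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch: each of the 10 outer iterations is timed as a whole, while the
// inner loop repeats the copy to stretch very short runs (e.g. 10,000
// repetitions for 1-8 KiB, 1,000 for 1 MiB and a single one otherwise).
std::vector<double> run_timed_iterations(const std::function<void()>& copy_task,
                                         std::size_t inner_repetitions) {
    std::vector<double> seconds_per_copy;
    for (int iteration = 0; iteration < 10; ++iteration) {
        const auto start = std::chrono::steady_clock::now();
        for (std::size_t r = 0; r < inner_repetitions; ++r) {
            copy_task();  // one DSA-backed copy of the configured size
        }
        const auto end = std::chrono::steady_clock::now();
        seconds_per_copy.push_back(
            std::chrono::duration<double>(end - start).count() / inner_repetitions);
    }
    return seconds_per_copy;
}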
\begin{figure}[!t]
\centering
@@ -68,52 +56,42 @@ We anticipate that single submissions will consistently yield poorer performance
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{images/plot-submitmethod.pdf}
\caption{Throughput for different Submission Methods and Sizes. Performing a copy with source and destination being node 0, executed by the \glsentryshort{dsa} on node 0. Observable is the submission cost which affects small transfer sizes differently, as there the completion time is lower.}
\caption{Throughput for different Submission Methods and Sizes. Performing a copy with source and destination being \glsentryshort{numa:node} 0, executed by the \glsentryshort{dsa} on \glsentryshort{numa:node} 0. The submission cost is observable and affects small transfer sizes more strongly, as their completion time is lower.}
\label{fig:perf-submitmethod}
\end{figure}
In Figure \ref{fig:perf-submitmethod} we conclude that with transfers of 1 MiB and upwards, the submission method makes no noticeable difference. For smaller transfers the performance varies greatly, with batch operations leading in throughput. Reese Kuper et al. observed that \enquote{SWQ observes lower throughput between 1-8 KB [transfer size]} \cite[pp. 6]{intel:analysis}. We however observe a much higher point of equalization, pointing to additional delays introduced by programming the \gls{dsa} through \gls{intel:dml}. \par
\todo{submission method still makes a difference for 1mib, therefore recommend even larger transfers}
Another limitation may be observed in this result, namely the inherent throughput limit per \gls{dsa} chip of close to 30 GiB/s. This is apparently caused by I/O fabric limitations \cite[p. 5]{intel:analysis}. \par
In Figure \ref{fig:perf-submitmethod} we conclude that, with transfers of \(1\ MiB\) and upwards, the cost of single submission drops. As a slight difference remains, even larger transfer sizes are preferable. For smaller transfers the performance varies greatly, with batch operations leading in throughput. Reese Kuper et al. observed that \enquote{SWQ observes lower throughput between \(1-8\ KB\) [transfer size]} \cite[pp. 6]{intel:analysis}. We, however, observe a much higher point of equalization, pointing to additional delays introduced by programming the \gls{dsa} through \gls{intel:dml}. Another limitation may be observed in this result, namely the inherent throughput limit per \gls{dsa} chip of close to \(30\ GiB/s\). This is apparently caused by I/O fabric limitations \cite[p. 5]{intel:analysis}. \par
\subsection{Multithreaded Submission}
\label{subsec:perf:mtsubmit}
As we might encounter access to one \gls{dsa} from multiple threads through the associated \glsentrylong{dsa:swq}, understanding the impact of this type of access is crucial. We benchmark multithreaded submission for one, two, and twelve threads, with the latter representing the core count of one processing sub-node on the test system. We spawn multiple threads, all submitting to one \gls{dsa}. Furthermore, we perform this benchmark with sizes of 1 MiB and 1 GiB to examine, if the behaviour changes with submission size. For smaller sizes, the completion time may be faster than submission time, leading to potentially different effects of threading due to the fact that multiple threads work to fill the queue, preventing task starvation. We may also experience lower-than-peak throughput with rising thread count, caused by the synchronization inherent with \gls{dsa:swq}. \par
As we might encounter access to one \gls{dsa} from multiple threads through the associated \glsentrylong{dsa:swq}, understanding the impact of this type of access is crucial. We benchmark multithreaded submission for one, two, and twelve threads, with the latter representing the core count of one processing sub-node on the test system. We spawn multiple threads, all submitting to one \gls{dsa}. Furthermore, we perform this benchmark with sizes of \(1\ MiB\) and \(1\ GiB\) to examine whether the behaviour changes with submission size. For smaller sizes, the completion time may be shorter than the submission time, leading to potentially different effects of threading, as multiple threads work to fill the queue, preventing task starvation. We may also experience lower-than-peak throughput with rising thread count, caused by the synchronization inherent to the \gls{dsa:swq}. \par
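A minimal sketch of this access pattern follows; the even chunking, thread handling, and names are simplifying assumptions and do not mirror the benchmark application exactly.

#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>
#include <dml/dml.hpp>

// Sketch: `thread_count` workers concurrently submit copies of their own
// chunk to the same DSA through the shared work queue and each waits for
// its own completion. Buffers are assumed to be allocated, NUMA-placed
// and pre-faulted by the caller.
void submit_from_threads(uint8_t* src, uint8_t* dst,
                         std::size_t size, unsigned thread_count) {
    std::vector<std::thread> workers;
    const std::size_t chunk = size / thread_count;
    for (unsigned t = 0; t < thread_count; ++t) {
        workers.emplace_back([=]() {
            auto handler = dml::submit<dml::hardware>(
                dml::mem_copy,
                dml::make_view(src + t * chunk, chunk),
                dml::make_view(dst + t * chunk, chunk));
            handler.get();  // block until this submission completes
        });
    }
    for (auto& worker : workers) worker.join();
}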
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{images/plot-mtsubmit.pdf}
\caption{Throughput for different Thread Counts and Sizes. Multiple threads submit to the same Shared Work Queue. Performing a copy with source and destination being node 0, executed by the DSA on node 0.}
\caption{Throughput for different Thread Counts and Sizes. Multiple threads submit to the same Shared Work Queue. Performing a copy with source and destination being \glsentryshort{numa:node} 0, executed by the \glsentryshort{dsa} on \glsentryshort{numa:node} 0.}
\label{fig:perf-mtsubmit}
\end{figure}
In Figure \ref{fig:perf-mtsubmit}, we note that threading has no discernible negative impact. The synchronization appears to affect single-threaded access in the same manner as it does for multiple threads. Interestingly, for the smaller size of 1 MiB, our assumption proved accurate, and performance increased with the addition of threads, which we attribute to enhanced queue usage. We ascribe the higher throughput observed with 1 GiB to the submission delay which is incurred more frequently with lower transfer sizes. \par
In Figure \ref{fig:perf-mtsubmit}, we note that threading has no discernible negative impact. The synchronization appears to affect single-threaded access in the same manner as it does for multiple threads. Interestingly, for the smaller size of \(1\ MiB\), our assumption proved accurate, and performance increased with the addition of threads, which we attribute to enhanced queue usage. We ascribe the higher throughput observed with \(1\ GiB\) to the submission delay which is incurred more frequently with lower transfer sizes. \par
\subsection{Data Movement from \glsentryshort{dram} to \glsentryshort{hbm}}
\label{subsec:perf:datacopy}
Moving data from \gls{dram} to \gls{hbm} is most relevant to the rest of this work, as it is the target application. As we discovered in Section \ref{subsec:perf:submitmethod}, one \gls{dsa} has a peak bandwidth limit of 30 GiB/s. For each node, the test system is configured with two DIMMs of DDR5-4800. \par
The naming scheme contains the data rate in Megatransfers per second. We calculate the transfers performed per second. \cite{kingston:ddr5-spec-overview} \par
Moving data from \glsentryshort{dram} to \gls{hbm} is most relevant to the rest of this work, as it is the target application. With \gls{hbm} offering higher bandwidth than the \glsentryshort{dram} of our system, we will be restricted by the available bandwidth of the source. To determine the upper limit achievable, we must calculate the available peak bandwidth. For each \gls{numa:node}, the test system is configured with two DIMMs of DDR5-4800. The naming scheme contains the data rate in Megatransfers per second; however, the processor specification notes that, for dual-channel operation, the maximum supported speed drops to \(4400\ MT/s\) \cite{intel:xeonmax-ark}. We calculate the transfers performed per second for one \gls{numa:node}, followed by the bytes per transfer \cite{kingston:ddr5-spec-overview}, and finally combine the two into the theoretical peak bandwidth per \gls{numa:node} of the system. \par
\[2\ DIMM * \frac{4800\ MT}{s\ *\ DIMM} = 9600\ MT/s\]
\[2\ DIMM \times \frac{4400\ MT}{s\ \times\ DIMM} = 8800\ MT/s\]
The data width of DDR5 is 64 bit. We calculate the amount of Bytes per Transfer. \cite{kingston:ddr5-spec-overview} \par
\[\frac{64b}{8b/B}\ /\ T = 8\ B/T\]
\[\frac{64\ b/T}{8\ b/B} = 8\ B/T\]
\[8800\ MT/s \times 8\ B/T = 70400 \times 10^6\ B/s = 65.56\ GiB/s\]
Using the results from the previous calculations, we are now able to calculate the theoretical peak throughput speed per Node on our test system. \par
From the observed bandwidth limitation of a single \gls{dsa} of about \(30\ GiB/s\) (see Section \ref{subsec:perf:submitmethod}) and the available memory bandwidth of \(65.56\ GiB/s\), we conclude that a copy task has to be split across multiple \gls{dsa}s to achieve peak throughput. Different methods of splitting will be evaluated. Given that our system consists of multiple sockets, communication crossing between sockets could introduce latency and bandwidth disadvantages \cite{bench:heterogeneous-communication}, which we will also evaluate. Beyond two \gls{dsa}s, only marginal gains are to be expected, due to the throughput limitation of the available memory. \par
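The following arithmetic, using the two figures above, illustrates this expectation: two \gls{dsa}s already cover most of the available memory bandwidth, leaving little headroom for additional accelerators.
\[65.56\ GiB/s - 2 \times 30\ GiB/s = 5.56\ GiB/s\]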
\[9600\ MT/s * 8B/T = 75\ GiB/s\]
To determine the optimal amount of \gls{dsa}s, we will measure throughput for one, two, four, and eight participating in the copy operations. We name the utilization of two \gls{dsa}s \enquote{Push-Pull}, as with two accelerators, we utilize the ones found on data source and destination \gls{numa:node}. As eight \gls{dsa}s is the maximum available on our system, this configuration will be referred to as \enquote{brute-force}. \par
We conclude that to achieve peak speeds, a copy task has to be split across multiple \gls{dsa}s \todo{mention why we conclude this}. Different methods of splitting will be evaluated. Given that our system consists of multiple sockets, communication crossing between sockets could introduce latency and bandwidth disadvantages \cite{bench:heterogeneous-communication}, which we will also evaluate. \par
To determine the optimal amount of \gls{dsa}s, we will measure throughput for one, two, four, and eight participating in the copy operations. We name the utilization of two \gls{dsa}s \enquote{Push-Pull}, as with two accelerators, we utilise the ones found on data source and destination node. As eight \gls{dsa}s is the maximum available on our system, this configuration will be referred to as \enquote{brute-force}. \par
For this benchmark, we transfer 1 Gibibyte of data from node 0 to the destination node. We present data for nodes 8, 11, 12, and 15. To understand the selection, see Figure \ref{fig:perf-xeonmaxnuma}, which illustrates the node IDs of the configured systems and the corresponding storage technology. Node 8 accesses the \gls{hbm} on node 0, making it the physically closest possible destination. Node 11 is located diagonally on the chip, representing the farthest intra-socket operation benchmarked. Nodes 12 and 15 lie diagonally on the second socket's CPU, making them representative of inter-socket transfer operations. \par
For this benchmark, we transfer \(1\ GiB\) of data from \gls{numa:node} 0 to the destination \gls{numa:node}. We present data for \gls{numa:node}s 8, 11, 12, and 15. To understand the selection, see Figure \ref{fig:perf-xeonmaxnuma}, which illustrates the \gls{numa:node} IDs of the configured system and the corresponding storage technology. \gls{numa:node} 8 accesses the \gls{hbm} on \gls{numa:node} 0, making it the physically closest possible destination. \gls{numa:node} 11 is located diagonally on the chip, representing the farthest intra-socket operation benchmarked. \gls{numa:node}s 12 and 15 lie diagonally on the second socket's CPU, making them representative of inter-socket transfer operations. \par
\begin{figure}[!t]
\centering
@@ -148,39 +126,41 @@ For this benchmark, we transfer 1 Gibibyte of data from node 0 to the destinatio
\label{fig:perf-dsa}
\end{figure}
We begin by examining the common behaviour of load balancing techniques depicted in Figure \ref{fig:perf-dsa}. The real-world peak throughput approaches nearly 64 GiB/s, aligning with the maximum achievable with the CPU, as demonstrated in Section \ref{subsec:perf:cpu-datacopy}. In Figure \ref{fig:perf-dsa:1}, a notable hard bandwidth limit is observed, just below the 30 GiB/s mark, reinforcing what was encountered in Section \ref{subsec:perf:submitmethod}: a single \gls{dsa} is constrained by I/O-Fabric limitations. \par
We begin by examining the common behaviour of load balancing techniques depicted in Figure \ref{fig:perf-dsa}. The real-world peak throughput of \(64\ GiB/s\) approaches the calculated available bandwidth. In Figure \ref{fig:perf-dsa:1}, a notable hard bandwidth limit is observed, just below the \(30\ GiB/s\) mark, reinforcing what was encountered in Section \ref{subsec:perf:submitmethod}: a single \gls{dsa} is constrained by I/O-Fabric limitations. \par
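Relating the measured peak to the theoretical peak bandwidth calculated above shows that the transfers utilize almost the entire available memory bandwidth:
\[\frac{64\ GiB/s}{65.56\ GiB/s} \approx 0.98\]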
Unexpected throughput differences are evident for all configurations, except the bandwidth-bound single \gls{dsa}. Notably, \gls{numa:node} 8 performs worse than copying to \gls{numa:node} 11. As \gls{numa:node} 8 serves as the \gls{hbm} accessor for the data source \gls{numa:node}, it should have the shortest data path. This suggests that the \gls{dsa} may suffer from sharing parts of the data path for reading and writing. Another interesting observation is that, contrary to our assumption, the physically more distant \gls{numa:node} 15 achieves higher throughput than the closer \gls{numa:node} 12. We lack an explanation for this anomaly and will further examine this behaviour in the analysis of the CPU throughput results in Section \ref{subsec:perf:cpu-datacopy}. \par
For the results of the Brute-Force approach illustrated in Figure \ref{fig:perf-dsa:8}, we observe peak speeds when copying across sockets from \gls{numa:node} 0 to \gls{numa:node} 15. This contradicts our assumption that peak bandwidth would be limited by the interconnect. However, for intra-node copies, there is an observable penalty for using the off-socket \gls{dsa}s. We will analyse this behaviour by comparing the different benchmarked configurations and summarize our findings on scalability. \par
\todo{ensure consistent usage of gls for node}
\begin{figure}[!t]
\centering
\begin{subfigure}{0.225\textwidth}
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-average-throughput.pdf}
\caption{Average Throughput for different amounts of participating \gls{dsa}.}
\label{fig:perf-dsa-analysis:average}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}{0.55\textwidth}
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-dsa-throughput-scaling.pdf}
\caption{Scaling Factor for different amounts of participating \gls{dsa}. Determined by formula \\ \(\frac{Throughput}{Basline\ Throughput} * \frac{1}{Utilization\ Factor}\) \\ with the baseline being Throughput for 1 \gls{dsa} and the utilization factor representing the factor of the amount of \gls{dsa}s being used over the baseline.}
\caption{Scaling Factor for different amounts of participating \gls{dsa}. Determined by formula \(Throughput\ /\ Baseline \times 1\ /\ Utilization\) with the Baseline being Throughput for one \gls{dsa} and the Utilization representing the factor of the amount of \gls{dsa}s being used over the baseline.}
\label{fig:perf-dsa-analysis:scaling}
\end{subfigure}
\caption{Scalability Analysis for different amounts of participating \gls{dsa}s. Displays the average throughput and the derived scaling factor. Shows that, although the throughput does increase with adding more accelerators, beyond two, the gained speed drops significantly. Calculated over the results from Figure \ref{fig:perf-dsa} and therefore applies to copies from \glsentryshort{dram} to \glsentryshort{hbm}.}
\label{fig:perf-dsa-analysis}
\end{figure}
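Restating the scaling factor from the caption of Figure \ref{fig:perf-dsa-analysis:scaling} in symbolic form, with \(T_n\) denoting the average throughput achieved with \(n\) participating \gls{dsa}s (notation introduced here only for clarity):
\[Scaling(n) = \frac{T_n}{T_1} \times \frac{1}{n}\]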
\todo{tp scaling shows assumption: limited benefit of adding more than 2 dsa}
When comparing the Brute-Force approach with Push-Pull in Figure \ref{fig:perf-dsa:2}, performance decreases by utilizing four times more resources over a longer duration. As shown in Figure \ref{fig:perf-dsa-analysis:scaling}, using Brute-Force still leads to a slight increase in overall throughput, although far from scaling linearly. Therefore, we conclude that, although data movement across the interconnect incurs additional cost, no hard bandwidth limit is observable. \todo{we state that it decreases but then that it increases} \par
From the average throughput and scaling factors in Figure \ref{fig:perf-dsa-analysis}, it becomes evident that splitting tasks over more than two \gls{dsa}s yields only marginal gains. This could be due to increased congestion of the overall interconnect; however, as no hard limit is encountered, this is not a definitive answer. \par
The choice of a load balancing method is not trivial. If peak throughput of one task is of relevance, Brute-Force for inter-socket and four \gls{dsa}s for intra-socket operation result in the fastest transfers. At the same time, these cause high system utilization, making them unsuitable for situations where multiple tasks may be submitted. For these cases, Push-Pull achieves performance close to the real-world peak while also not wasting resources due to poor scaling (see Figure \ref{fig:perf-dsa-analysis:scaling}). \par
\todo{dont choose brute here, 4dsa is best}
\subsection{Data Movement using CPU}
\label{subsec:perf:cpu-datacopy}
@@ -194,7 +174,7 @@ For evaluating CPU copy performance we use the benchmark code from the previous
\caption{DML code for allnodes running on software path.}
\label{fig:perf-cpu:swpath}
\end{subfigure}
\hspace{8mm}
\hspace{5mm}
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-andrepeak-throughput.pdf}
@@ -217,7 +197,7 @@ In Figure \ref{fig:perf-cpu:andrepeak}, peak throughput is achieved for intra-no
In this section we summarize the conclusions drawn from the three benchmarks performed in the sections above and outline a utilization guideline \todo{we dont do this, either write it or leave this out}. We also compare CPU and \gls{dsa} for the task of copying data from \gls{dram} to \gls{hbm} \todo{weird wording}. \par
\begin{itemize}
\item From \ref{subsec:perf:submitmethod} we conclude that small copies under 1 MiB in size require batching and still do not reach peak performance. Task size should therefore be at or above 1 MiB and otherwise use the CPU \todo{why otherwise cpu, no direct link given}.
\item From \ref{subsec:perf:submitmethod} we conclude that small copies under \(1\ MiB\) in size require batching and still do not reach peak performance. Task sizes should therefore be at or above \(1\ MiB\); otherwise, the copy should be performed by the CPU \todo{why otherwise cpu, no direct link given}.
\item Section \ref{subsec:perf:mtsubmit} assures that access from multiple threads does not negatively affect the performance when using \glsentrylong{dsa:swq} for work submission. Due to the lack of \glsentrylong{dsa:dwq} support, we have no data to determine the cost of submission to the \gls{dsa:swq}.
\item In \ref{subsec:perf:datacopy}, we found that using more than two \gls{dsa}s results in only marginal gains. The choice of a load balancer therefore is the Push-Pull configuration, as it achieves fair throughput with low utilization.
\end{itemize}
@@ -226,8 +206,6 @@ Once again, we refer to Figures \ref{fig:perf-dsa} and \ref{fig:perf-cpu}, both
We discovered an anomaly for \gls{numa:node} 12 for which we did not find an explanation. As the behaviour is also exhibited by the CPU, discovering the root issue falls outside the scope of this work. \par
Scaling was found to be less than linear. No conclusive answer was given for this either. We assumed that this happens due to increased congestion of the interconnect. \par
\todo{mention measures undertaken to find an explanation here}
Even though we could not find an explanation for all measurements, this chapter still gives insight into the performance of the \gls{dsa}, its strengths and its weaknesses. It provides data-driven guidance on a complex architecture, helping to find the optimum for applying the \gls{dsa} to our expected and possibly different workloads. \par
