\chapter{Evaluation}
\label{chap:evaluation}
% Every thesis in our field includes a performance evaluation. This
% chapter should make clear which methods were applied to assess
% performance and which results were obtained. It is important not to
% merely present the reader with a few numbers, but to also discuss the
% results. It is recommended to first explain one's own expectations
% regarding the results and then to explain any deviations observed.
In this chapter, we define our expectations for applying the developed \texttt{Cache} to \glsentrylong{qdp} and then evaluate the observed results. The code used is described in more detail in Section \ref{sec:impl:application}. \par
\section{Benchmarked Task}
\label{sec:eval:bench}
The benchmark executes a simple query as illustrated in Figure \ref{fig:qdp-simple-query}. From here on, we use the notation \(SCAN_a\) for the pipeline that scans and subsequently filters column \texttt{a}, \(SCAN_b\) for the pipeline that prefetches column \texttt{b}, and \(AGGREGATE\) for the projection and final summation step. We use a column size of 4 GiB. The work is divided over multiple groups, and each group spawns threads for each pipeline step. For a fair comparison, each tested configuration uses 64 threads for the first stage (\(SCAN_a\) and \(SCAN_b\)) and 32 threads for the second stage (\(AGGREGATE\)), while being restricted to \gls{numa:node} 0 through pinning. For configurations without prefetching, \(SCAN_b\) is not executed. We measure the time spent in each pipeline, the cache hit rate, and the total processing time. \par
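The two-stage pipeline described above can be sketched as follows. This is a simplified, scaled-down illustration rather than the actual benchmark code from Section \ref{sec:impl:application}: the function name \texttt{run\_query}, the filter predicate, and the column sizes are placeholders, and the thread counts, NUMA pinning, and \gls{dsa} offload are omitted.

```cpp
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Simplified sketch of the benchmark's two-stage pipeline. Sizes are
// scaled down from the real 4 GiB columns; the 64/32 thread counts,
// pinning to NUMA node 0, and the DSA offload in SCAN_b are omitted.
uint64_t run_query(const std::vector<uint64_t>& a,
                   const std::vector<uint64_t>& b) {
  std::vector<uint8_t> mask(a.size());

  // First stage: SCAN_a scans and filters column a, while SCAN_b
  // (when prefetching is enabled) copies chunks of b towards HBM.
  std::thread scan_a([&] {
    for (std::size_t i = 0; i < a.size(); ++i)
      mask[i] = (a[i] < 42) ? 1 : 0;  // placeholder filter predicate
  });
  std::thread scan_b([&] {
    // In the real benchmark this submits copy jobs to the DSA;
    // here it is a no-op stand-in.
  });
  scan_a.join();
  scan_b.join();

  // Second stage: AGGREGATE projects b under the mask and sums it up.
  uint64_t sum = 0;
  std::thread aggregate([&] {
    for (std::size_t i = 0; i < b.size(); ++i)
      if (mask[i]) sum += b[i];
  });
  aggregate.join();
  return sum;
}
```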
In the straightforward implementation, pipelines \(SCAN_a\) and \(SCAN_b\) execute concurrently, complete their workload, and then signal \(AGGREGATE\), which finalizes the operation. With the goal of improving the cache hit rate, we loosen this restriction and let \(SCAN_b\) work freely, synchronizing only \(SCAN_a\) with \(AGGREGATE\). Work is therefore submitted to the \gls{dsa} as frequently as possible, ideally completing each caching operation for a chunk of \texttt{b} before \(SCAN_a\) finishes processing the associated chunk of \texttt{a}. \par
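The loosened synchronization amounts to a chunk-wise handshake between \(SCAN_a\) and \(AGGREGATE\), with \(SCAN_b\) running completely unsynchronized. The sketch below only models this signalling; the names \texttt{ChunkSync} and \texttt{pipeline\_demo} are hypothetical and do not appear in the actual implementation.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>

// Hypothetical chunk-wise handshake: SCAN_a signals AGGREGATE after
// each chunk, while SCAN_b submits prefetches freely and is never
// waited on.
struct ChunkSync {
  std::mutex m;
  std::condition_variable cv;
  std::size_t ready = 0;  // chunks of column a finished by SCAN_a

  void signal() {
    { std::lock_guard<std::mutex> lk(m); ++ready; }
    cv.notify_one();
  }
  void wait_for(std::size_t chunk) {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return ready > chunk; });
  }
};

std::size_t pipeline_demo(std::size_t chunks) {
  ChunkSync sync;
  std::size_t consumed = 0;

  std::thread scan_a([&] {
    for (std::size_t c = 0; c < chunks; ++c)
      sync.signal();  // chunk c of a is filtered, release AGGREGATE
  });
  std::thread scan_b([&] {
    // free-running: would submit DSA copy jobs for chunks of b here
  });
  std::thread aggregate([&] {
    for (std::size_t c = 0; c < chunks; ++c) {
      sync.wait_for(c);  // block until SCAN_a has finished chunk c
      ++consumed;
    }
  });
  scan_a.join();
  scan_b.join();
  aggregate.join();
  return consumed;
}
```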
\section{Expectations}
\label{sec:eval:expectations}
The simple query presents a challenging scenario for the \texttt{Cache}. As the filter operation applied to column \texttt{a} is not particularly complex, its execution time can be assumed to be short. The \texttt{Cache} therefore has little time in which to prefetch, which will amplify delays caused by processing overhead in the \texttt{Cache} or during accelerator offload. Additionally, the task can be assumed to be memory bound. As the prefetching of \texttt{b} in \(SCAN_b\) and the load and subsequent filter of \texttt{a} in \(SCAN_a\) execute in parallel, caching directly reduces the memory bandwidth available to \(SCAN_a\) when both columns are located on the same \gls{numa:node}. \par
Due to the challenges posed by sharing memory bandwidth, we will benchmark prefetching in two configurations. In the first, both columns \texttt{a} and \texttt{b} are located on the same \gls{numa:node}. We expect to demonstrate the memory bottleneck in this situation through the execution time of \(SCAN_a\) rising by the amount of time spent prefetching in \(SCAN_b\). In the second setup, the columns are distributed over two \gls{numa:node}s, both still backed by \glsentryshort{dram}. In this configuration, \(SCAN_a\) should only suffer from the additional threads performing the prefetching. \par
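The first expectation can be stated more concretely. If the memory bus is saturated and both columns reside on the same \gls{numa:node}, the prefetch traffic of \(SCAN_b\) effectively serializes with the accesses of \(SCAN_a\), so we roughly expect
\begin{displaymath}
T_{SCAN_a}^{\text{co-located}} \approx T_{SCAN_a}^{\text{baseline}} + T_{SCAN_b}
\end{displaymath}
while for distributed columns \(T_{SCAN_a}\) should remain close to its baseline. This notation is ours and merely formalizes the expectation stated above. \par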
\section{Observations}
In this section, we present our findings from applying the \texttt{Cache} developed in Chapters \ref{chap:design} and \ref{chap:implementation} to \gls{qdp}. We begin by presenting the results without prefetching, which represent the lower and upper boundaries respectively. \par
\begin{figure}[h!tb]
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-timing-dram.pdf}
\caption{Columns \texttt{a} and \texttt{b} located on the same \glsentryshort{dram} \glsentryshort{numa:node}.}
\label{fig:timing-comparison:baseline}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-timing-hbm.pdf}
\caption{Column \texttt{a} located in \glsentryshort{dram} and \texttt{b} in \glsentryshort{hbm}.}
\label{fig:timing-comparison:upplimit}
\end{subfigure}
\caption{Time spent on functions \(SCAN_a\) and \(AGGREGATE\) without prefetching for different locations of column \texttt{b}. Figure (a) represents the lower boundary by using only \glsentryshort{dram}, while Figure (b) simulates perfect caching by storing column \texttt{b} in \glsentryshort{hbm} during benchmark setup.}
\label{fig:timing-comparison}
\end{figure}
Our baseline is the performance achieved with both columns \texttt{a} and \texttt{b} located in \glsentryshort{dram} and no prefetching. The upper limit is represented by measuring the scenario where \texttt{b} is already located in \gls{hbm} at the start of the benchmark, simulating prefetching with no overhead or delay. \par
\begin{figure}[h!tb]
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-timing-prefetch.pdf}
\caption{Prefetching with columns \texttt{a} and \texttt{b} located on the same \glsentryshort{dram} \glsentryshort{numa:node}.}
\label{fig:timing-results:prefetch}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{images/plot-timing-distprefetch.pdf}
\caption{Prefetching with columns \texttt{a} and \texttt{b} located on different \glsentryshort{dram} \glsentryshort{numa:node}s.}
\label{fig:timing-results:distprefetch}
\end{subfigure}
\caption{Time spent on functions \(SCAN_a\), \(SCAN_b\) and \(AGGREGATE\) with prefetching. Operations \(SCAN_a\) and \(SCAN_b\) execute concurrently. Figure (a) shows the bandwidth limitation, as the time for \(SCAN_a\) increases drastically due to the copying of column \texttt{b} to \glsentryshort{hbm} taking place in parallel. For Figure (b), the columns are located on different \glsentryshort{numa:node}s, so the \(SCAN\)-operations do not compete for bandwidth.}
\label{fig:timing-results}
\end{figure}
\begin{itemize}
\item In Figure \ref{fig:timing-results:distprefetch}, \(AGGREGATE\) should theoretically perform at the level of the \glsentryshort{hbm} upper limit, but the additional concurrent workload introduced by prefetching slows it down as well.
\item In Figure \ref{fig:timing-results:prefetch}, the increased time for \(SCAN_a\) is plausibly explained by the shared memory bandwidth, while the slowdown of \(AGGREGATE\) appears unreasonably high.
\end{itemize}
\todo{consider benchmarking only with one group and lowering workload size accordingly to reduce effects of overlapping on the graphic}
\begin{table}[h!tb]
\centering
\input{tables/table-qdpspeedup.tex}
\caption{Speedup of different \glsentryshort{qdp} configurations over \glsentryshort{dram}. The result for \glsentryshort{dram} serves as the baseline, while \glsentryshort{hbm} represents the upper boundary achievable with perfect prefetching. Prefetching was performed with the same parameters and data locations as \gls{dram}, caching on Node 8 (the \glsentryshort{hbm} accessor for the executing Node 0). For prefetching with distributed columns, \texttt{a} and \texttt{b} were located on different Nodes.}
\label{table:qdp-speedup}
\end{table}
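For clarity, the speedup reported in Table \ref{table:qdp-speedup} is understood as the usual ratio of total processing times, with the \glsentryshort{dram} configuration as the baseline:
\begin{displaymath}
S_{\text{config}} = \frac{T_{\text{\glsentryshort{dram}}}}{T_{\text{config}}}
\end{displaymath}
so that \(S > 1\) indicates a configuration faster than the pure \glsentryshort{dram} run. \par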
\section{Discussion}
%%% Local Variables:
%%% TeX-master: "diplom"
%%% End: