From 27c57aa4ce4e7739de9090875cb07006df2aa8fc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Constantin=20F=C3=BCrst?=
Date: Mon, 5 Feb 2024 00:42:05 +0100
Subject: [PATCH] add note for content of evaluation chapter

---
 thesis/content/60_evaluation.tex | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/thesis/content/60_evaluation.tex b/thesis/content/60_evaluation.tex
index f5c286b..2967ca4 100644
--- a/thesis/content/60_evaluation.tex
+++ b/thesis/content/60_evaluation.tex
@@ -14,7 +14,7 @@ In this chapter we will define our expectations, applying the developed Cache to
 
 \section{Expectations}
 
-\begin{figure}[h]
+\begin{figure}[h!tb]
 \centering
 \includegraphics[width=0.7\textwidth]{images/simple-query-graphic.pdf}
 \caption{Illustration of the benchmarked simple query in (a) and the corresponding pipeline in (b). Taken from \cite[Fig. 1]{dimes-prefetching}.}
@@ -25,6 +25,8 @@ The benchmark executes a simple query as illustrated in Figure \ref{fig:eval-sim
 
 With this difficult scenario, we expect to spend time analysing runtime behaviour of our benchmark in order to optimize the Cache and the way it is applied to the query. Optimizations should yield slight performance improvement over the baseline, using DRAM, and will not reach the theoretical peak, where the data for \texttt{b} resides in HBM. \par
 
+Consider using parts of the flamegraph. We observe the same speed as DRAM, even though allocation is performed in the timed region and not beforehand. Mention that dml performs busy waiting (cite dsa-paper 4.4 for the use of interrupts mentioned in the architecture) and the optimization with a weak wait. Also mention the weak-access optimization for the prefetching scenario.
+
 \section{Observation and Discussion}
 
 %%% Local Variables: